From Discourse Magazine <[email protected]>
Subject Don’t Let Edge Cases Define AI Rules
Date March 19, 2025 10:03 AM
  Links have been removed from this email. Learn more in the FAQ.

Last autumn, The New York Times reporter Kevin Roose wrote about the heartbreaking story of ninth grader Sewell Setzer III, who ended his life. He had been diagnosed with mild Asperger’s syndrome, anxiety and disruptive mood dysregulation disorder. Setzer spent hours conversing with chatbots. His parents noticed their son withdrew from the world; he told a favorite chatbot he thought about suicide. The Times printed the boy’s final message to the chatbot: “What if I told you I could come home right now?” He shot himself with his stepfather’s handgun.
This set off a flurry of legislative activity. How could it not? AI just killed someone. Right? California state Senator Steve Padilla introduced a bill requiring warning labels that chatbots are “not human,” citing the death of Setzer in his press release. In New York, SB 934 also requires warning labels on chatbots. A staffer of bill sponsor Senator Kristen Gonzalez said the proposed law was recently updated “to reflect the Senator's concern over the tragedy of Sewell Setzer.” During a hearing on SB 2, a Connecticut AI bill, the bill’s main sponsor brought up the Setzer case. The same event was cited in discussions about the Texas Responsible AI Governance Act. Similar warning label bills are being considered in Illinois and Nebraska.
The legislative response to Setzer’s death reflects a broader anxiety: AI is advancing at an incredible pace, outstripping our legislators’ ability to evaluate it. Here’s the basic narrative of how that happens: American stock from land-grant universities launch Silicon Valley startups. Most fail. The winners get a feature in the next Walter Isaacson hagiography—and a ton of money. Their startups graduate to growth companies, luring the best minds from elite American, Canadian, Indian and Iranian technology universities.
These elite coders, hardened by the cold agōgē of their computer science curricula, chase the ever-soaring state-of-the-art. State-of-the-art, or SOTA, refers to the best-performing model on a given benchmark, and AI is achieving SOTA status in many areas. In 1997, IBM’s Deep Blue defeated Garry Kasparov in chess. In 2011, Watson crushed “Jeopardy!” legends Ken Jennings and Brad Rutter. DeepMind’s AlphaGo outplayed Go champion Lee Sedol in 2016. As early as 2014, researchers claimed their chatbots had passed the Turing test; by 2023 large language models sounded more human than anyone at my university’s laundromat. AI development has been largely iterative—like evolution—but with occasional quantum leaps, as if overnight humans went from having no arms to winning Olympic gold in shot put.
Since these legions of developers forged intelligence from silicon, life is better, or it feels like it soon will be. We feel some real propulsion of progress, like we’re taking off, catching glimpses of prosperity through the clouds. We might even beat China, hoist another Cold War title belt. “Mr. Xi, tear down this Great Wall.” But opposing the AI optimists are many critics, who are uneasy about this rapidly developing technology and want to limit its reach—which is where the politicians come in.
While AI developers competed at the vanguard of technological progress, state lawmakers hunted for the next issue. Legislative chambers are gluttons for pain, for stakes, for opposition. They want something to show their constituents that they care, they’re working hard, staying on top of the issues of the day. If there’s no problem, they’ll invent one. And they’ve conjured their latest: AI-induced suicide. Legislatures across the country, most notably in California, New York, Connecticut, Illinois, Texas and Nebraska (together representing 110 million Americans), have deputized the tragic death of a 14-year-old to justify new constraints on AI.
None of these bills would have saved Sewell Setzer. They require chatbot operators to periodically remind users that chatbots aren’t human. But Character.AI already displayed a message above all its chats reminding its users, including Setzer, that “everything Characters say is made up!” Roose notes that Sewell knew the chatbot wasn’t a real person. There was a label, and it did nothing.
Setzer’s mother, Megan Garcia, filed a lawsuit accusing Character.AI of being responsible for her son’s death. Suicide, especially in the case of a young person, is incomprehensible to those left behind. That’s why our first instinct is to grasp for a note, an explanation, and his mother saw the chat logs as such. But the deeper question is why his final words were to a computer, rather than to the important people in his life. No one knows if Setzer could have been saved, but if anything could have saved him it was not Character.AI, SB 243 or any other disclaimer bill.
So if this legislation won’t save our kids, what will it do? These warning labels are a Trojan horse for government control of every generative AI system. The New York law, for example, is broad, shorn of any limiting factors. The bill applies to anyone with a working generative AI model. If you’re a nonprofit like EleutherAI, creating open-source models, you face up to a $100,000 fine for failing to affix a warning label to your model’s output. A professor and two research assistants chasing state-of-the-art on a shoestring grant now need to plaster their output with some sort of warning. The bill applies to Adobe Photoshop and any other photo and video editing applications that use generative AI. The same goes for a two-person startup with two users or internal company systems that use generative AI.
A tragic one-off event with alternative and more direct causes than AI “impersonating” a human should not be the pretext for kneecapping an entire industry. These bills are not about protecting kids. They are about expanding regulatory power over AI at the precise moment the U.S. can least afford to slow down. The U.S. and China are in a dead heat in the race for AI supremacy. The next generation of warfare will be largely autonomous, relying heavily on AI. This is why our U.S.-China Economic and Security Review Commission is pushing a Manhattan Project initiative to fund the development of AI systems. Every unnecessary restriction on AI development is a strategic blunder in a competition with the greatest stakes.
None of this means we should close our hearts to pity, but we should close them to specious justifications for an overbroad law. Do the stakes of our competition with China allow for such unfounded caution? The answer is no. The successes of Chinese companies such as DeepSeek, Alibaba and Manus warn that Chinese AI development may be outpacing America’s. This is not the moment to make it harder to build. If New York or any other state enacts a law like SB 934, the federal government should preempt it on national security grounds. The stakes are too high to let reactionary politics dictate technological progress.
