The AI safety law that Big Tech supported
October 14th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )
From veto to victory: The AI legislation everyone is watching
Last month, California enacted a landmark AI safety law that could become a blueprint for the rest of the nation.
The Transparency in Frontier Artificial Intelligence Act, or SB 53, will require large AI companies to publish transparency reports detailing how they’ll avoid catastrophic risks.
The law resulted from a year-long effort to align the interests of policymakers, tech leaders, and safety advocates. This coalition aimed to pass a bill that could shape the behavior of AI developers in the nation’s largest market and, in the absence of federal AI legislation, usher in similar laws in states across the country.
In this week’s newsletter, we examine SB 53 and its implications for future AI policy, as well as how the process that led to its passage could offer a playbook for other lawmakers seeking to partner with tech companies to pass balanced AI regulation.
// What the law will do
SB 53 ([link removed] ) exists to prevent the most advanced AI models from causing “catastrophic” harm (defined as at least $1 billion in damage or 50 or more injuries or deaths).
The law imposes new safety requirements on companies developing advanced “frontier” models, with the heaviest obligations falling on the biggest AI companies (those whose annual revenue exceeds $500 million).
- More extensive public reporting & disclosure: The law requires AI companies to be far more transparent about how they handle safety issues. They will need to publicly disclose how they identify, assess, and respond to catastrophic risks. To comply, before launching a “new frontier model” or updating an existing one, AI companies must publicly release a transparency report detailing their plans to ensure safety.
- Enabling whistleblowers to sound the alarm: SB 53 also prohibits developers from establishing rules, policies, or contracts—including non-disclosure agreements—that prevent tech insiders from reporting potential safety issues or illegal behavior.
// The backstory
California Governor Gavin Newsom signed SB 53 one year after vetoing the bill’s predecessor, SB 1047, which would have imposed even stricter safety requirements on AI companies. Many in the tech industry considered those requirements onerous and infeasible ([link removed] ) and argued the bill would stifle innovation in a burgeoning sector.
Among the more controversial requirements was a “kill switch” provision, which would have required developers to be able to fully shut down a model deemed unsafe, even before it began initial training. The earlier bill also would have imposed steeper penalties ([link removed] ) , whereas SB 53 caps the fine for a single violation at $1 million, as determined by the California Attorney General, who may scale the penalty based on the severity of noncompliance.
After vetoing SB 1047 last September, Newsom convened a working group ([link removed] ) to research ways to strike a “delicate balance ([link removed] ) ” between innovation and safety regulation. Too little safety regulation could put consumers at risk, a pattern that has played out with social media. An overly stringent regulatory approach, however, could hinder AI innovation and undermine U.S. competitiveness in an industry with geopolitical stakes.
The working group released its report ([link removed] ) and recommendations in June 2025, and SB 53, introduced by California State Senator Scott Wiener, reflected many of these recommendations.
The intervening year between the veto of SB 1047 and the passage of SB 53 was an eventful one for AI regulation. It included a failed attempt to impose a federal moratorium on state AI laws ([link removed] ) , which united state lawmakers in the conviction that they must retain their authority to regulate AI systems.
In California, the debate over SB 53 involved a wide range of actors, “shifting political and business alliances, behind-the-scenes blow-ups and last-minute interventions,” according to in-depth reporting by POLITICO ([link removed] ) .
- Executives at the world’s biggest AI companies wanted to avoid the complexity of complying with a patchwork of state-level AI laws.
- Venture capital firms like Andreessen Horowitz, which opposed the state bill, feared that restrictive legislation could stifle the next generation of AI companies. (SB 53’s impact on smaller AI companies remains to be seen.)
- Ambitious policymakers like Wiener and Newsom were eager to declare victory on a landmark piece of legislation.
- Youth-led groups like Encode AI ([link removed] ) , which helped advance SB 1047 in 2024, rallied young technologists and advocates to support responsible AI oversight. The Transparency Coalition ([link removed] ) also played a key role, pressing for stronger disclosure and accountability measures in the bill’s final language.
// How the law came to be
It’s not just the law itself that could become a national blueprint. The way SB 53 came to be offers insight into how policymakers and industry executives can co-create legislation.
To get AI companies on board for the bill’s second attempt, State Senator Wiener sent letters in July 2025 to major tech firms, including Google, Meta, and OpenAI, asking for their input. Most industry executives remained neutral on this year’s bill rather than publicly opposing it, and Anthropic ultimately endorsed ([link removed] ) it.
Dean Ball, the former Trump White House official responsible for AI policy and a key figure in the push for an AI regulation moratorium, said ([link removed] ) that the bill and its New York counterpart were “technically sophisticated, reflecting a clear understanding (for the most part) about what it is possible for AI developers to do.”
The approach that produced SB 53 may prove as influential as the law itself. SB 53 is far from the only state-level AI law passed this year: according to the National Conference of State Legislatures, 38 states ([link removed] ) passed or enacted roughly 100 AI-related measures in 2025 alone.
So what makes it so significant? We see several key implications.
// The implications
It remains to be seen how SB 53 will be enforced and what its impact will be on the industry and consumer safety, but its passage has larger implications for tech policy.
AI policy needs to balance safety-focused regulation with innovation.
Adam Billen, vice president of public policy at Encode AI, told TechCrunch ([link removed] ) that SB 53 shows “state regulation doesn’t have to hinder AI progress.” Not everyone agrees. Collin McCune of Andreessen Horowitz warned ([link removed] ) that the law “risks squeezing out startups, slowing innovation, and entrenching the biggest players.” The real test will be whether this kind of regulation can set guardrails without calcifying the field, a challenge that underscores the need for smarter policy design. (For examples, see Project Liberty’s report on building a Fair Data Economy ([link removed] ) .)
SB 53 highlights the partnerships between lawmakers and industry leaders necessary to drive tech policy.
The transformation of a vetoed bill in 2024 into a policy victory in 2025 demonstrates how lawmakers and tech leaders chose collaboration over confrontation. Senator Wiener’s office invited companies to engage directly, writing, “Our goal is to craft legislation that enhances transparency while fostering the innovation that drives California’s economy.” Tech firms didn’t get everything they wanted, but most accepted the outcome, particularly in the hope that it paves the way for federal legislation. (Meanwhile, companies like Meta and OpenAI continue to back Super PACs ([link removed] ) pushing for looser AI rules nationwide.)
There are open questions about state vs. federal AI regulation.
With so many states passing their own AI laws, many tech companies would rather see a single federal framework than a patchwork of state rules. In its endorsement of SB 53, Anthropic said: “While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington.” McCune of Andreessen Horowitz agrees. “We need the federal government to lead in governing the national AI market,” he said. So far, a federal law regulating AI remains elusive, but there are pockets of recent progress: U.S. Senator Josh Hawley joined Senator Dick Durbin in introducing legislation ([link removed] ) that would classify AI systems as products, allowing for liability claims when an AI system causes harm.
Can state laws become national blueprints?
Other states are closely watching the precedent California has set with SB 53. Legislatures across the country are drafting their own AI bills—New York’s version ([link removed] ) , led by Assemblymember Alex Bores and Senator Andrew Gounardes, has already passed and awaits the Governor’s signature. SB 53 could also influence debates beyond the U.S. In Europe, the E.U.’s AI Act goes further than California’s law, directly restricting certain AI models and imposing strict safety and design requirements on high-risk systems. At the same time, the E.U. recently unveiled its Apply AI ([link removed] ) plan, aimed at spurring innovation and preventing over-regulation. Together, these approaches highlight the global search for equilibrium between governance and growth in the AI era.
// Collaborative policymaking
California’s experience with SB 53 shows that progress in tech policy doesn’t come from choosing sides between innovation and safety, but from reimagining how they work together.
“We have a bill that’s on my desk that we think strikes the right balance,” Newsom said ([link removed] ) . “We worked with industry, but we didn’t submit to industry. We’re not doing things to them, but we’re not necessarily doing things for them.”
As AI continues to reshape our economies, institutions, and daily lives, this kind of collaborative policymaking may prove as consequential as the technology itself.
📰 Other notable headlines
// 🤔 Courts don’t know what to do about AI crimes. AI-generated images and videos are stumping prosecutors in Latin America, even as courts embrace AI to tackle case backlogs, according to an article in Rest of World ([link removed] ) . (Free).
// 🖥 An article in WIRED ([link removed] ) discussed how OpenAI wants ChatGPT to be your future operating system. (Paywall).
// 🤳 In a review of the new AI video app Sora in The Wall Street Journal ([link removed] ) , columnist Wilson Rothman writes, “your phone might be filled with hundreds of fond memories of things that never happened and places you never went, moments you can sit back and ‘relive’ when there’s nobody else around except your AI pals.” (Paywall).
// 📄 An article in MIT Technology Review ([link removed] ) reported on how AI and Wikipedia have sent vulnerable languages into a doom spiral. It’s a lesson in what happens when AI models get trained on junk pages. (Paywall).
// 🏫 This school district asked students to draft its AI policy. As school leaders around the country debate how to handle the technology, one district in Silicon Valley turned to teenagers for help, according to an article in The Washington Post ([link removed] ) . (Paywall).
// 🥰 Where once daters were duped by soft-focus photos and borrowed chat-up lines, now they’re seduced by ChatGPT-polished banter and AI-generated charm. An article in The Guardian ([link removed] ) explored how some who are seeking love are outsourcing entire conversations to ChatGPT. (Free).
// 🎙 An episode of The New York Times Hard Fork ([link removed] ) podcast discussed the new AI-generated video tools and social media feeds from Google, Meta, and OpenAI. (Free).
Partner news
// Nonprofits: receive up to $100K in funding for your work on AI & youth
Young Futures ([link removed] ) has announced the “Oops!…AI Did It Again” Challenge—a $1M+ funding initiative to empower teens in the age of artificial intelligence. The open call invites U.S.-based nonprofits to propose youth-centered solutions that help young people understand, question, and shape AI. Apply here ([link removed] ) by October 24th.
// Rewiring Democracy: How AI Will Transform Politics and Citizenship
Nathan E. Sanders, a researcher at Harvard’s Berkman-Klein Center ([link removed] ) , co-authored a new book that explores how artificial intelligence is reshaping the foundations of democracy. Learn more here ([link removed] ) .
// Tristan Harris on AI, work, and the human cost of speed
Center for Humane Technology ([link removed] ) co-founder Tristan Harris joined Jon Stewart on The Daily Show to discuss how AI’s rapid rise is reshaping the workforce and human development. Watch the interview here ([link removed] ) .
What did you think of today's newsletter?
We'd love to hear your feedback and ideas. Reply to this email.
// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.
Thank you for reading.
Facebook ([link removed] )
LinkedIn ([link removed] )
Twitter ([link removed] )
Instagram ([link removed] )
10 Hudson Yards, Fl 37,
New York, New York, 10001
Unsubscribe ([link removed] ) Manage Preferences ([link removed] )
© 2025 Project Liberty LLC