From veto to victory: The AI legislation everyone is watching
Last month, California enacted a landmark AI safety law that could become a blueprint for the rest of the nation.
The Transparency in Frontier Artificial Intelligence Act, or SB 53, will require large AI companies to publish transparency reports detailing how they’ll avoid catastrophic risks.
The law resulted from a year-long effort to align the interests of policymakers, tech leaders, and safety advocates. This coalition aimed to pass a bill that could shape the behavior of AI developers in the nation’s largest market and, in the absence of federal AI legislation, usher in similar laws in states across the country.
In this week’s newsletter, we examine SB 53 and its implications for future AI policy, as well as how the process that led to its passage could offer a playbook for other lawmakers seeking to partner with tech companies to pass balanced AI regulation.
// What the law will do
SB 53 exists to prevent the most advanced AI models from causing “catastrophic” harm (defined as at least $1 billion in damage or 50 or more injuries or deaths).
The law imposes new safety requirements on companies developing advanced frontier models, with the most extensive obligations falling on the biggest AI companies (those whose annual revenue exceeds $500 million).
- More extensive public reporting & disclosure: The law requires AI companies to be far more transparent about how they handle safety issues. They must publicly disclose how they identify, assess, and respond to catastrophic risks. To comply, AI companies must publicly release transparency reports before launching a “new frontier model” or updating an existing one, detailing their plans to ensure safety.
- Enabling whistleblowers to sound the alarm: SB 53 also prohibits developers from establishing rules, policies, or contracts—including non-disclosure agreements—that prevent tech insiders from reporting potential safety issues or illegal behavior.
// The backstory
California Governor Gavin Newsom signed SB 53 one year after vetoing its predecessor, SB 1047, which would have imposed even stricter safety requirements on AI companies. Many in the tech industry considered those requirements onerous and infeasible, arguing they would stifle innovation in a burgeoning sector.
Among the more controversial requirements was a “kill switch” provision, which would have required developers, before beginning to train a model, to build in the capability to fully shut it down if it was deemed unsafe. The proposed legislation also imposed steeper penalties, whereas SB 53 caps the fine for a single violation at $1 million, as determined by the California Attorney General, who may scale the penalty based on the severity of noncompliance.
After the veto of SB 1047 last September, Newsom convened a working group to research ways to strike a “delicate balance” between innovation and safety regulation. Too little safety regulation could put consumers at risk, a pattern that has played out in social media. However, an overly stringent regulatory approach could hinder AI innovation and undermine U.S. competitiveness in an industry with geopolitical stakes.
The working group released its report and recommendations in June 2025, and SB 53, introduced by California State Senator Scott Wiener, reflected many of these recommendations.
The intervening year between the veto of SB 1047 and the passage of SB 53 was an eventful one for AI regulation. It included a failed attempt in Congress to impose a national moratorium on state AI laws, which united state lawmakers in the conviction that they must retain their authority to regulate AI systems.
In California, the debate over SB 53 involved a wide range of actors, “shifting political and business alliances, behind-the-scenes blow-ups and last-minute interventions,” according to in-depth reporting by POLITICO.
- Executives at the world’s biggest AI companies wanted to avoid the complexity of complying with a patchwork of state-level AI laws.
- Venture capital firms like Andreessen Horowitz, which opposed the state bill, feared that restrictive legislation could stifle the next generation of AI companies. (SB 53’s impact on smaller AI companies remains to be seen.)
- Ambitious policymakers like Wiener and Newsom were eager to declare victory on a landmark piece of legislation.
- Youth-led groups like Encode AI, which helped advance SB 1047 in 2024, rallied young technologists and advocates to support responsible AI oversight. The Transparency Coalition also played a key role, pressing for stronger disclosure and accountability measures in the bill’s final language.
// How the law came to be
It’s not just the law itself that could become a national blueprint. The way SB 53 came to be offers insight into how policymakers and industry executives can co-create legislation.
To get AI companies on board for the bill’s second attempt, State Senator Wiener sent letters in July 2025 to major tech firms, including Google, Meta, and OpenAI, asking for their input. Most industry executives stayed neutral on this year’s bill rather than publicly opposing it, and Anthropic ultimately endorsed it.
Dean Ball, the former Trump White House official responsible for AI policy and a key figure in the effort for an AI regulation moratorium, said that the bill and its New York counterpart were “technically sophisticated, reflecting a clear understanding (for the most part) about what it is possible for AI developers to do.”
The approach that produced SB 53 may prove as influential as the law itself. SB 53 is far from the only state-level AI law passed this year. According to the National Conference of State Legislatures, 38 states passed or enacted about 100 AI regulations in 2025 alone.
So what makes it so significant? We see several key implications.
// The implications
It remains to be seen how SB 53 will be enforced and what its impact will be on the industry and consumer safety, but its passage has larger implications for tech policy.
AI policy needs to balance safety-focused regulation with innovation.
Adam Billen, vice president of public policy at Encode AI, told TechCrunch that SB 53 shows “state regulation doesn’t have to hinder AI progress.” Not everyone agrees. Collin McCune of Andreessen Horowitz warned that the law “risks squeezing out startups, slowing innovation, and entrenching the biggest players.” The real test will be whether this kind of regulation can set guardrails without calcifying the field—a challenge that underscores the need for smarter policy design. (For examples, see Project Liberty’s report on building a Fair Data Economy.)
SB 53 highlights the partnerships between lawmakers and industry leaders necessary to drive tech policy.
The transformation of a vetoed bill in 2024 into a policy victory in 2025 demonstrates how lawmakers and tech leaders chose collaboration over confrontation. Senator Wiener’s office invited companies to engage directly, writing, “Our goal is to craft legislation that enhances transparency while fostering the innovation that drives California’s economy.” Tech firms didn’t get everything they wanted, but most accepted the outcome, especially if it helps pave the way for federal legislation. (Meanwhile, companies like Meta and OpenAI continue to back Super PACs pushing for looser AI rules nationwide.)
There are open questions about state vs. federal AI regulation.
With so many states passing their own AI laws, many tech companies prefer federal regulation over the patchwork of state regulations. In its endorsement of SB 53, Anthropic said: “While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington.” McCune from Andreessen Horowitz agrees. “We need the federal government to lead in governing the national AI market,” he said. So far, a federal law regulating AI remains elusive. But there are pockets of recent progress. U.S. Senator Josh Hawley joined Senator Dick Durbin in introducing legislation that would classify AI systems as products, allowing for liability claims when an AI system causes harm.
Can state laws become national blueprints?
Other states are closely watching the precedent California has set with SB 53. Legislatures across the country are drafting their own AI bills—New York’s version, led by Assemblymember Alex Bores and Senator Andrew Gounardes, has already passed and awaits the Governor’s signature. SB 53 could also influence debates beyond the U.S. In Europe, the E.U.’s AI Act goes further than California’s law, directly restricting certain AI models and imposing strict safety and design requirements for high-risk systems. At the same time, the E.U. recently unveiled its Apply AI plan, aimed at spurring innovation and preventing over-regulation. Together, these approaches highlight the global search for equilibrium between governance and growth in the AI era.
// Collaborative policymaking
California’s experience with SB 53 shows that progress in tech policy doesn’t come from choosing sides between innovation and safety, but from reimagining how they work together.
“We have a bill that’s on my desk that we think strikes the right balance,” Newsom said. “We worked with industry, but we didn’t submit to industry. We’re not doing things to them, but we’re not necessarily doing things for them.”
As AI continues to reshape our economies, institutions, and daily lives, this kind of collaborative policymaking may prove as consequential as the technology itself.