Last week (Aug. 28), California’s State Assembly passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) with a 41-9 vote, marking one of the first significant legal frameworks in the U.S. to regulate A.I. The bill mandates that A.I. companies operating in California implement several safety measures when training and releasing sophisticated A.I. models. These include the ability to fully shut down a model if necessary, protections against “unsafe post-training modifications,” and testing procedures to assess whether a model poses a risk of “causing or enabling a critical harm.”
The bill has ignited intense debate across Silicon Valley. While some industry leaders, including Elon Musk, see it as necessary to ensure safe A.I. development, others worry the act could hinder innovation. Major A.I. companies like OpenAI and Anthropic, as well as prominent political figures like Nancy Pelosi and Zoe Lofgren, have argued that the bill’s focus on catastrophic harms could disproportionately affect small, open-source A.I. developers.
“The requirements will mean that investors in some A.I. startups will have a portion of their investments spent on regulatory compliance rather than on developing the technology,” Jamie Nafziger, an international data privacy attorney, told Observer. “It would be better to define the harms about which we are concerned and have law enforcement and regulators control all market participants rather than running liability and control through the model developers.”
Critics also take issue with the bill targeting only a narrow category of A.I. models: large frontier models that cost more than $100 million to train and exceed a computing power threshold of 10^26 FLOPS (floating-point operations, a measure of total computation). However, the legislation does not define how to calculate the training costs used to determine whether the financial threshold is met. This ambiguity will likely increase compliance costs for model developers and invite gaming of the numbers to avoid coverage.
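For a sense of scale, the 10^26 figure can be sanity-checked with a back-of-the-envelope calculation. The sketch below assumes the widely used heuristic of roughly 6 FLOPs per parameter per training token for dense transformer models; the heuristic and the example model size and token count are illustrative assumptions, since the bill itself does not prescribe an accounting method.

```python
# Back-of-the-envelope check against SB 1047's 10^26 FLOPS threshold.
# Assumes the common heuristic of ~6 FLOPs per parameter per training
# token for dense transformers; the bill does not specify a calculation
# method, and the example numbers below are hypothetical.

THRESHOLD_FLOPS = 1e26  # compute threshold named in the bill


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6 * num_parameters * num_tokens


# Hypothetical frontier run: 400 billion parameters, 15 trillion tokens.
flops = estimated_training_flops(400e9, 15e12)
print(f"Estimated compute: {flops:.1e} FLOPs")          # ~3.6e25
print(f"Exceeds threshold: {flops > THRESHOLD_FLOPS}")  # False
```

Under these assumptions, even a very large training run lands below the threshold, which is why critics argue the bill reaches only a handful of the biggest developers while leaving its cost-accounting ambiguity unresolved.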
“It will certainly stop the distribution of open-source A.I. platforms, which will kill the entire A.I. ecosystem, not just startups, but also academic research,” Yann LeCun, Meta’s chief A.I. scientist, wrote in an X post in June. Likewise, in a June letter circulated by the startup incubator Y Combinator, 140 A.I. startup founders voiced concerns that SB 1047 would severely impact California’s ability to retain A.I. talent and remain a hub for A.I. innovation.
“If California stands alone, it may make A.I. model developers want to leave the state,” Nafziger added. “Model developers have a lot of responsibilities for downstream potential uses of their models under this law, and it will complicate the open-source world significantly.”
In response to criticism, SB 1047 underwent several amendments before last week’s passage, including removing criminal penalties for perjury, establishing a “Board of Frontier Models,” safeguarding startups’ ability to modify open-source A.I. models, and narrowing pre-harm enforcement.
“In our assessment, the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs,” Anthropic co-founder and CEO Dario Amodei wrote in a letter sent to California Gov. Gavin Newsom on Aug. 21. “We would urge the government to maintain a laser focus on catastrophic risks, and to resist the temptation to commandeer SB 1047’s provisions to accomplish unrelated goals.”
Senator Scott Wiener, the bill’s author, argues that SB 1047 is a “highly reasonable bill” that presents a balanced approach, reflecting both the potential dangers of A.I. models and the tech industry’s existing commitments. “We’ve worked hard all year, with open-source advocates, Anthropic, and others, to refine and improve the bill,” Wiener wrote in a blog post on Aug. 21. “SB 1047 is well calibrated to what we know about foreseeable A.I. risks, and it deserves to be enacted.” The bill now awaits Newsom’s signature to become state law.