California Governor Vetoes SB 1047, Citing Concerns Over AI Regulation Framework

Written by Jeremy Werner

Posted on 09/30/2024

California Governor Gavin Newsom vetoed Senate Bill 1047 (SB 1047), a landmark piece of legislation that aimed to impose strict safety and security regulations on the development of advanced artificial intelligence (AI) systems. Introduced by Senator Scott Wiener, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” was intended to regulate powerful AI models, particularly those with the potential to pose significant risks to public safety.

The bill had garnered widespread support from AI researchers, including renowned figures such as Geoffrey Hinton and Yoshua Bengio, and had passed both houses of the California Legislature, reaching Governor Newsom’s desk for final approval. However, in his veto message, the Governor argued that the bill’s approach to AI regulation was overly narrow and could hinder innovation in the AI sector.

Newsom Warns Against Overly Narrow Regulation

Governor Newsom acknowledged the need to regulate AI but questioned SB 1047’s design. “SB 1047 magnified the conversation about threats that could emerge from the deployment of AI,” Newsom wrote. However, he said the bill’s focus on model size and training cost—requiring oversight for systems costing more than $100 million to develop—could create a “false sense of security.”

He warned that such limits might ignore smaller AI systems capable of causing equal or greater harm. Newsom also stressed that regulation must adapt as technology evolves. “Adaptability is critical as we race to regulate a technology still in its infancy,” he stated. The governor argued that any framework should address the actual risks an AI system poses rather than its scale or price tag.

Concerns About Innovation and Economic Impact

Newsom cautioned that SB 1047 might overregulate even basic AI tools if they appeared in larger systems. He said this approach could slow innovation, especially in California, home to 32 of the world’s 50 leading AI companies. “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data,” Newsom explained. Applying strict rules across the board, he argued, could restrict the same innovation that drives progress and benefits the public.

California’s Broader AI Oversight Efforts

Despite rejecting the bill, Newsom reaffirmed his commitment to responsible AI regulation. He highlighted ongoing efforts under his September 2023 Executive Order, which directed state agencies to assess AI threats and vulnerabilities affecting California’s critical infrastructure. The governor also noted that he had recently signed more than a dozen AI-related bills targeting specific risks. He pointed to federal progress, including initiatives from the U.S. AI Safety Institute under the National Institute of Standards and Technology (NIST), as valuable models for risk management. Newsom said he plans to continue collaborating with lawmakers, federal agencies, and AI experts to create a balanced, science-based regulatory framework.

Supporters and Critics React

SB 1047 was seen as a pioneering attempt to regulate AI systems with heavy computational demands. Supporters praised its focus on public safety while maintaining California’s leadership in innovation. Senator Wiener, the bill’s sponsor, said it was vital for the state to take proactive steps to manage AI risks given its global role in technology development. However, critics, including former Speaker of the House Nancy Pelosi, said the bill failed to capture the complexity of AI regulation. Pelosi called the proposal “well-intentioned but ill-informed,” arguing that California needs a more flexible and informed approach to AI oversight.

Need Help?

If you’re wondering how California’s AI laws—or any global AI regulations—might affect you, reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance and answer your questions and concerns.
