California Governor Gavin Newsom vetoed a controversial artificial intelligence (AI) safety bill following opposition from the tech industry, which argued the legislation could drive AI companies out of the state and stifle innovation.

Newsom stated that the bill “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data” and would impose “stringent standards to even the most basic functions — so long as a large system deploys it.”

He said he consulted experts on generative AI to help California “develop workable guardrails” focused “on an empirical, science-based trajectory analysis.”

Newsom also directed state agencies to expand their assessment of risks from potential AI-related catastrophes.

Democratic State Senator Scott Wiener, the bill’s sponsor, pushed the legislation to protect the public before AI developments become uncontrollable. Opponents, by contrast, argued that California’s growing AI industry could face uncertainty if the bill became law.

After the veto, Wiener said the decision made California less safe and criticised reliance on voluntary commitments from the AI industry, which he argued “are not enforceable and rarely work out well for the public.”

“We cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” Newsom said, adding that he nonetheless believed Californians “must not settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.”

He pledged to work on AI regulation in the next legislative session, as efforts for federal AI safeguards have stalled in the US Congress.

Newsom veto reaction

Tech industry group the Chamber of Progress praised the veto, commenting: “The California tech economy has always thrived on competition and openness.”

The bill would have required safety testing for advanced AI models that cost more than $100m to develop or that require significant computing power. Developers in California would also have been required to provide methods for disabling their AI models, including a “kill switch.”

The legislation was opposed by companies including Alphabet’s Google, Microsoft-backed OpenAI, and Meta Platforms, as well as by some US Democrats, including Representative Nancy Pelosi. Tesla CEO Elon Musk, who also runs the AI firm xAI, supported the bill, as did Amazon-backed Anthropic, though with reservations about certain provisions.

Separately, Newsom signed legislation requiring the state to assess AI risks to California’s critical infrastructure. The state is analysing threats to energy infrastructure and plans similar risk assessments for water and communications infrastructure.

In July 2024, the US Department of Commerce released new AI safety guidelines, 270 days after President Joe Biden issued an Executive Order on AI development. The National Institute of Standards and Technology (NIST) published three final guidance documents for AI safety.

Earlier, in April 2024, the US and UK formed a partnership to enhance global AI safety, one of the first cross-jurisdictional efforts in the field.
