California Governor Gavin Newsom vetoed the state's artificial intelligence safety bill on Sunday. The bill would have required companies spending more than $100 million on AI training to build in safeguards. Newsom expressed concern that the bill could stifle innovation and cost California its lead in AI development.
California Governor Gavin Newsom vetoed the state's artificial intelligence safety bill on Sunday, a victory for AI giants like OpenAI and the big tech companies that had campaigned against it.
The bill, SB 1047, was introduced by Sen. Scott Wiener earlier this year and passed by the California State Assembly last month. It would have required companies that spend more than $100 million training AI models to develop safeguards preventing their technology from being used to cause serious harm to society. The bill defined such harm broadly, including the creation of dangerous weapons and the carrying out of cyberattacks.
“This veto is a setback for everyone who believes in oversight of massive corporations,” Sen. Wiener said in a statement Sunday.
About two weeks ago, Newsom said he was concerned about the bill’s potential “chilling effect” on AI development, adding that he didn’t want California to lose its edge in AI.
“The bill applies stringent standards to even the most basic functions, so long as a large system deploys it,” the governor said in a statement Sunday. “I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
The vetoed bill would also have required companies operating in California to report safety incidents involving their AI models to the state government. It would have protected corporate whistleblowers and allowed third parties to test the safety of the models. The bill also required that developers be able to fully shut down their AI tools if necessary.
The debate in California reflects the challenge governments face in walking the fine line between letting technology companies innovate and protecting against new potential risks. Newsom may also want to show the state remains friendly to business after a spate of high-profile corporate departures, including Chevron, Tesla, Oracle, Charles Schwab, and CBRE.
In a release announcing the veto, Newsom’s office also noted that the governor signed 17 bills related to generative AI last month, aimed at cracking down on deepfakes and misinformation and at protecting children and workers.
Relief for others
Newsom’s veto will come as a relief to many in Silicon Valley who criticized the bill as a threat to innovation.
Rob Sherman, Meta’s vice president of policy, praised Newsom for vetoing the bill. In a post on X on Sunday, he said the bill would “harm business growth and job creation and break with the state’s long-standing tradition of promoting open source development.”
Marc Andreessen, general partner at venture capital firm Andreessen Horowitz, also praised Newsom’s decision in a statement.
Jason Kwon, OpenAI’s chief strategy officer, warned in an August letter to Sen. Wiener that the bill could hinder progress and force companies out of California.
The ChatGPT maker joined tech giant Meta in lobbying against the bill. Meta said the bill would expose developers to significant legal liability and could stifle the open source movement.
Andreessen Horowitz cited similar innovation concerns and paid for a petition campaign against the bill.
To be sure, not all big tech companies opposed the bill.
Elon Musk, who founded the AI company xAI last year, said last month that while it was “a tough call and will make some people upset,” he believed California should “probably pass” the SB 1047 AI safety bill.
Anthropic CEO Dario Amodei appeared to soften his opposition midway through the debate. He said in August that the amended bill’s “benefits likely outweigh its costs,” but added that “we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us.”
Several former OpenAI employees also supported the safety bill and said OpenAI’s opposition to the bill was disappointing.
“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing,” former OpenAI researchers William Saunders and Daniel Kokotajlo wrote in a letter. “But we resigned from OpenAI because we lost trust that it would develop its AI systems safely, honestly, and responsibly.”