On September 29, 2024, California Governor Gavin Newsom vetoed Senate Bill 1047, which would have enacted the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (the “Act”) to create a comprehensive regulatory framework for the development of artificial intelligence models. The veto embodies the dilemma that has emerged around the regulation of AI applications: how can laws prevent harms in the use and development of AI, while promoting innovation and harnessing the power of new technologies to effect positive change?

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act at a Glance

Although the Act sought to follow the EU AI Act in establishing a comprehensive regulatory framework, it eschewed the AI Act’s risk-based approach to AI regulation. Instead, the Act would have applied to “covered models,” which are defined as AI models that exceed thresholds in the computing power and costs associated with their training.

The Act would have imposed certain obligations and restrictions on covered models, including:

  • The implementation of administrative, technical, and physical cybersecurity measures to protect against unauthorized access, misuse, or post-training modifications
  • The implementation of full-shutdown capabilities
  • A written safety and security protocol, along with the designation of senior personnel to ensure implementation of the protocol as written, the retention of the protocol for as long as the model is made available (plus five years), and the publication and filing of the protocol with the California attorney general
  • Assessment of whether a covered model is capable of causing or enabling material harm
  • Not making a covered model available for commercial or public use if there is a risk that the model will create or enable a critical harm
  • Undergoing annual third-party audits of covered models and retaining the resulting audit reports
  • The submission of a statement of compliance to California’s attorney general
  • Reporting safety incidents to the attorney general

The Act would have provided for attorney general enforcement, with monetary penalties and injunctive relief available for violations.

Although Senate Bill 1047 will not become law, federal efforts are underway that would regulate AI developers and cloud providers. For example, the U.S. Department of Commerce’s Bureau of Industry and Security recently released a Notice of Proposed Rulemaking that would impose cyber reporting obligations on frontier AI developers and compute providers.

In Context

The veto of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act is the culmination of a busy legislative period for AI regulation, with Governor Newsom signing eighteen new laws relating to AI in the preceding 30 days. The new California legislation runs the gamut of subject areas, from laws requiring the disclosure of AI tools used in political advertisements to the establishment of a commission to consider the inclusion of AI literacy in California schools.

While this frenzy of legislative activity suggests a willingness to regulate AI, the refusal to enact a comprehensive regulatory framework in the style of the European Union’s AI Act or Colorado’s recent AI law is significant. The Act attracted opposition from U.S. Representatives Nancy Pelosi and Zoe Lofgren of California, AI thought leaders like Professor Fei-Fei Li, as well as California organizations leading AI development. The criticisms, many of which were reflected in Governor Newsom’s veto message, noted the impact of the law on innovation, its failure to adopt a risk-based approach like the EU AI Act, and potential harms to the development and availability of open source AI models.

Despite the veto, Governor Newsom reiterated his commitment “to working with the Legislature, federal partners, technology experts, ethicists, and academia, to find the appropriate path forward, including legislation and regulation.” Organizations that develop and use AI should continue to monitor legislative developments as statehouses consider both comprehensive and use-specific proposals to regulate AI.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Cynthia is an Intellectual Property Partner in Baker McKenzie's Palo Alto office. She advises clients across a wide range of industries including Technology, Media & Telecoms, Energy, Mining & Infrastructure, Healthcare & Life Sciences, and Industrials, Manufacturing & Transportation. Cynthia has deep experience in complex cross-border, IP, data-driven, and digital transactions, creating bespoke agreements in novel technology fields.

Author

Brian provides advice on global data privacy, data protection, cybersecurity, digital media, direct marketing, information management, and other legal and regulatory issues. He is Chair of Baker McKenzie's Global Data Privacy and Security group.

Author

Cristina focuses her practice on regulatory and transactional issues in global privacy and data protection, including data security, data breach notification, global privacy, website privacy policies, behavioral advertising, cross-border data transfers, and comprehensive compliance programs.

Author

Ella is an associate in our Firm's Intellectual Property & Technology Practice Group and is based in our San Francisco office.

Author

Justine focuses her practice on both proactive and reactive cybersecurity and data privacy services, representing clients in matters related to information governance, diligence in acquisitions and investments, incident preparedness and response, the California Consumer Privacy Act, privacy litigation, and cyber litigation.

Author

Garrett is an associate in Baker McKenzie's North America Intellectual Property Group and is based in our San Francisco office. His practice focuses on helping clients build effective information governance programs, comply with privacy laws and regulations, and respond to cybersecurity incidents.

Author

Avi Toltzis is a Knowledge Lawyer in Baker McKenzie's Chicago office.