California Enacts First U.S. Frontier AI Safety Law Requiring Transparency From Big Developers

The move sets up California to influence federal policy, with state implementation now underway.

Overview

  • Gov. Gavin Newsom signed SB 53 on Sept. 29, targeting developers with more than $500 million in annual revenue that train “frontier” models using more than 10^26 computational operations, with obligations tied to defined catastrophic-risk thresholds.
  • Covered companies must publish a documented safety framework and post pre-deployment transparency reports detailing capabilities, intended uses, risk assessments, and mitigations.
  • Developers must report critical safety incidents to California’s Office of Emergency Services within 15 days and alert an appropriate authority within 24 hours if imminent harm is identified.
  • The California attorney general can enforce the law with civil penalties of up to $1 million per violation for noncompliance or false statements.
  • The Act also creates the CalCompute consortium to design a public cloud for safe AI research, with a framework due by Jan. 1, 2027; major AI firms have voiced cautious support, while industry groups and the White House warn against a state-by-state regulatory patchwork.