Particle.news

California's AI Safety Law Takes Effect Jan. 1, Forcing Big Developers to Disclose Disaster Plans and Report Incidents

State regulators are filling a federal gap on AI oversight.

Overview

  • Companies building large AI models must notify California of any critical safety incident within 15 days, or within 24 hours if the threat is imminent, with fines up to $1 million per violation.
  • Developers are required to post public frameworks explaining intended uses, user restrictions, catastrophic‑risk assessments, incident response plans, and whether an independent third party reviewed those efforts.
  • Employees who assess critical safety risks receive whistleblower protections under the new law.
  • Catastrophic risk is defined as scenarios involving more than 50 deaths, chemical, biological, radiological, or nuclear harm, or over $1 billion in damage. The law applies only to developers with at least $500 million in annual revenue and excludes many government‑used systems.
  • Incident reports go to the state Office of Emergency Services, remain confidential, and may be redacted as trade secrets; anonymized public summaries begin in 2027. New York has credited California's approach in its own new law.