Overview
- Large frontier AI developers must publicly post frameworks describing how they follow national, international and industry‑consensus standards for safety and security.
- Companies are required to report critical AI safety incidents to California’s Office of Emergency Services within 15 days, with anonymized, aggregated public reports beginning in 2027.
- The law strengthens whistleblower protections and empowers the state attorney general to seek civil penalties of up to $1 million per violation.
- SB 53 creates the CalCompute consortium within the Government Operations Agency to plan a public computing cluster and directs the California Department of Technology to recommend updates to the law annually.
- The rules apply to high‑end models identified by compute‑based thresholds and a statutory definition of “catastrophic risk.” Reactions were mixed: tech trade groups opposed the bill, Anthropic endorsed it, Meta offered cautious praise, and OpenAI lobbied against it.