Overview
- Governor Gavin Newsom vetoed SB 1047 last September, rejecting one-size-fits-all safety testing requirements for large AI models.
- The report warns that advances in foundation models raise risks of chemical, biological, radiological, and nuclear (CBRN) threats and could cause severe, potentially irreversible harms absent robust safeguards.
- Commissioned by Newsom, the 53-page California Report on Frontier AI Policy recommends legally protected whistleblower channels, incident reporting systems, and public disclosures to enhance transparency.
- It calls for third-party risk assessments, backed by safe-harbor provisions, to enable independent evaluation of AI model capabilities beyond developers' internal tests.
- The report argues that targeted state regulation could streamline compliance, counter a proposed 10-year federal moratorium on state AI laws, and serve as a template for broader AI governance.