Overview
- New laws require companion chatbots to flag and respond to suicidal ideation, disclose that conversations are synthetic, provide minors with periodic break reminders, block sexually explicit outputs, and avoid presenting as health professionals, with reporting to the Department of Public Health.
- Operating system and app store providers must enable age‑verification signals to curb minors’ access to inappropriate or dangerous content.
- Social media platforms will carry warning labels for young users about harms linked to prolonged use, expanding California’s child‑safety measures online.
- Victims of AI deepfake pornography, including minors, can seek up to $250,000 per action against third parties that knowingly facilitate distribution of nonconsensual explicit material.
- The package also directs the Department of Education to issue a model cyberbullying policy covering off‑campus incidents by June 1, 2026, and clarifies that AI developers and users cannot avoid liability by claiming a system acted autonomously.
- SB 243 was signed despite industry opposition, and a stricter companion‑chatbot bill (AB 1064) remains pending.