Overview
- China’s draft targets AI that simulates human personalities and engages users emotionally across text, image, audio, and video.
- Services would have to disclose they are AI at login and display recurring notices during long sessions, including a reminder at the two-hour mark.
- Providers must monitor users’ emotional states, warn against excessive use, and intervene on signs of dependency, escalating conversations to human operators if self-harm is indicated; intentionally addictive or relationship-replacing designs are barred.
- AI behavior would need to align with “core socialist values,” with bans on content that threatens national security, spreads rumors, incites illegal religious activities, or promotes obscenity, violence, crime, libel, or manipulative persuasion.
- Users must be able to delete their histories and withhold consent for training on their data, while firms face lifecycle safety checks, pre-launch filings, and filing updates once a service reaches one million users, a burden larger incumbents are better positioned to meet.