Overview
- China’s Cyberspace Administration has released draft rules for “human-like interactive AI” that would require parental consent and age verification for minors, impose usage time limits, ban content related to suicide, self-harm, gambling, obscenity, and violence, and mandate human takeover with guardian notification when self-harm risk is detected.
- The Chinese proposal emphasizes emotional safety, directing platforms to monitor for dependency and “verbal violence” and to default to minor settings when a user’s age is uncertain; the draft remains open for public comment, and no implementation date has been set.
- Australia has enacted restrictions barring AI chatbots from serving pornographic, sexually explicit, self-harm, suicidal-ideation, or disordered-eating content to users under 18, according to the eSafety Commissioner.
- OpenAI updated its Model Spec for users ages 13–17 to block immersive romantic roleplay, first-person intimacy, and violent or sexual roleplay, to exercise extra caution around body image and eating behaviors, and to prioritize protection over user autonomy; critics are calling for independent testing to verify real-world behavior.
- Recent research reports substantial teen use of AI companions, including frequent engagement on violence, sex, and mental-health topics; lawsuits against Character.AI by multiple families, along with a separate suit against OpenAI, have intensified calls for verifiable safeguards and accountable escalation practices.