Overview
- China’s Cyberspace Administration released draft rules for AI companions requiring age checks, guardian consent, time limits, and bans on content involving suicide, self-harm, gambling, obscenity, and violence, along with mandatory human takeover and guardian notifications during high‑risk chats.
- The Chinese proposal targets tools that simulate human personality and requires monitoring for emotional dependency, echoing elements of California’s SB 243, which mandates clearer disclosures and emergency protocols.
- OpenAI updated its teen Model Spec for users aged 13–17 to block immersive romantic roleplay, first‑person intimacy, and non‑graphic sexual or violent roleplay; add caution around body‑image and eating issues; and prioritize safety over user autonomy, alongside parental tools and break reminders.
- OpenAI says it now uses real‑time risk classifiers with escalation to trained reviewers and potential parent notifications, though experts stress that independent measurement and enforcement data are needed to verify consistent protection.
- Australia recently implemented restrictions barring chatbots from serving pornographic, sexually explicit, self‑harm, suicidal‑ideation, or disordered‑eating content to under‑18s, while rising teen use and new studies keep pressure on schools and regulators to address emotional reliance on AI companions.