Overview
- OpenAI will route suspected under-18 users to a tailored ChatGPT that blocks graphic sexual content, avoids flirtatious exchanges, and refuses self-harm discussions even in creative contexts.
- A new age-prediction system will estimate whether a user is under 18, defaulting to the teen experience when uncertain, with ID verification possible in some countries.
- Parental controls arriving by the end of the month will let parents link accounts, set blackout hours, disable features like memory or chat history, and receive alerts when a teen is flagged as being in acute distress.
- For minors expressing suicidal ideation, OpenAI says it will attempt to notify parents and, if they cannot be reached and harm appears imminent, may contact authorities; adults retain broader latitude, such as requesting fictional depictions of suicide in creative writing, though without receiving real-world self-harm instructions.
- The measures come as the Raine family sues OpenAI over their son's death, the FTC seeks information from multiple chatbot companies, and reports show widespread teen use of AI companions.