Overview
- Within about a month, parents will be able to link their accounts to an adolescent’s ChatGPT profile and set age‑appropriate behavior rules for responses.
- OpenAI will notify parents when the system detects signs of acute distress in a teen’s conversations; parents will also get controls for managing the teen’s account settings.
- Over the next 120 days, certain sensitive chats will be routed to reasoning models such as GPT‑5‑thinking that the company says apply safety guidance more consistently.
- OpenAI says it is improving recognition and handling of mental and emotional distress and will draw on a network of roughly 250 medical professionals in 60 countries.
- The announcement follows a lawsuit by the parents of a 16‑year‑old Californian who died by suicide; a late‑August blog post had already signaled that parental controls were coming.