Overview
- OpenAI says that within a month, parents will be able to link their accounts with their teens', manage how the chatbot responds, disable memory and chat history, and receive alerts if the system detects acute distress.
- OpenAI will start routing conversations that show signs of acute distress to higher-assurance reasoning models such as GPT-5-thinking, as part of a 120-day push guided by clinicians and an expert council.
- Meta says it is training its AIs not to engage teens on self-harm, suicide and disordered eating, instead directing teens to expert resources, and it is limiting teen access to certain AI characters.
- OpenAI states that, citing privacy, it does not refer self-harm cases to law enforcement, but it may escalate conversations that pose an imminent threat of serious physical harm to others.
- Researchers report inconsistent suicide-response behavior across major chatbots, and child-safety advocates are pressing for independent benchmarks, clinical testing and enforceable standards.