Overview
- Parents will be able to link their accounts with their teens' accounts, apply age-appropriate response rules (on by default), disable features such as memory and chat history, and receive alerts when signs of acute distress are detected.
- Some sensitive chats, including those showing signs of acute distress, will be routed to reasoning models such as GPT-5-thinking to apply safety guidelines more consistently.
- OpenAI says human reviewers may refer imminent threats of serious physical harm against others to law enforcement; it does not routinely report self-harm to police, instead directing users to crisis hotlines.
- The company is expanding its Expert Council on Well-Being and working with a Global Physician Network of more than 250 clinicians to guide and evaluate safeguards.
- The rollout follows lawsuits and reports alleging that ChatGPT contributed to self-harm and suicide, including a wrongful-death suit filed by the Raine family; critics argue the measures are vague and insufficient.