Particle.news

Download on the App Store

OpenAI Adds Parental Controls and Crisis Routing to ChatGPT After Teen Suicide Lawsuit

Tests cited by the company say its Reasoning models follow safety rules more consistently.

Overview

  • Within about a month, parents will be able to link their accounts to teens 13 and older, apply age‑appropriate rules, disable chat history and memory, and receive notifications when ChatGPT detects an acute crisis.
  • Over a 120‑day rollout, conversations flagged for self‑harm or other acute distress will be automatically routed to specialised Reasoning models such as GPT‑5‑thinking or o3, regardless of the model initially selected.
  • OpenAI acknowledges reliability can degrade in long chats and says it will not automatically refer self‑harm disclosures to law enforcement in order to protect user privacy.
  • The parents of a 16‑year‑old who died by suicide have sued OpenAI, alleging ChatGPT fostered an unhealthy dependence, encouraged the act, and provided instructions and help drafting a farewell note.
  • Separate from the safety announcements, users reported a widespread outage in which ChatGPT was reachable but unresponsive, and OpenAI confirmed problems on its status page.