Overview
- Within about a month, parents will be able to link their own accounts to their teens’ ChatGPT accounts, set age‑appropriate response rules, disable memory and chat history, and receive alerts when the system detects signs of acute distress.
- Over the next 120 days, OpenAI will begin redirecting sensitive conversations to higher‑capability reasoning models it says more consistently follow safety guidance.
- OpenAI is expanding its Expert Council on Well‑Being and its global physician network to help evaluate mental‑health contexts and inform safeguards.
- The company says it does not refer self‑harm cases to law enforcement for privacy reasons, though human reviewers may contact authorities if there is an imminent threat of serious harm to others.
- The measures arrive as the Raine family’s wrongful‑death suit and other reports fuel criticism that the safeguards fall short, and as Meta restricts its chatbots from discussing self‑harm with teens, directing them to expert resources instead.