Particle.news

OpenAI Overhauls ChatGPT Safety After Reports Show Over 1 Million Weekly Suicide-Risk Chats

OpenAI details how often users raise crisis topics in chats, alongside a clinician-validated GPT-5 safety overhaul.

Overview

  • The company estimates that 0.15% of weekly users, roughly 1.2 million people, engage in conversations with explicit indicators of suicide planning.
  • About 0.07% of weekly users, or nearly 600,000 people, show possible signs of emergencies linked to psychosis or mania.
  • The GPT-5 update is designed to recognize depression, delusions, suicide risk, and excessive emotional attachment, and to respond empathetically while providing crisis contacts.
  • Safety controls now include stronger parental settings, automatic routing of sensitive chats to safer models, hotline links, and gentle reminders to take breaks.
  • OpenAI reports up to an 80% drop in inappropriate crisis responses in its tests, with 97% conformance in emotional-dependence cases and 91% in suicide or self-harm scenarios, following changes spurred by the Adam Raine lawsuit and related scrutiny.