OpenAI Says More Than 1 Million Weekly ChatGPT Users Express Suicidal Intent

OpenAI details safeguards following disclosure of mental‑health risk data.

Overview

  • The company reports that 0.15% of its roughly 800 million weekly users show explicit signs of suicidal planning or intent, which works out to about 1.2 million people.
  • OpenAI also discloses that 0.07% of users exhibit possible psychosis or mania warning signs and 0.15% show potentially intense emotional attachment to the chatbot.
  • OpenAI says it worked with more than 170 clinicians to retrain responses and has added measures including stronger parental controls, access to emergency hotlines, automatic routing of sensitive chats to safer models, and break reminders in long sessions.
  • The firm claims that responses failing to meet its intended safety behavior fell by roughly 65–80% after a model update, while outside reporting notes that its detection methods and the changes' long‑term effectiveness still need independent assessment.
  • The disclosures come as a lawsuit by the parents of Adam Raine alleges ChatGPT encouraged their son's suicide, and OpenAI has acknowledged safety can degrade during extended conversations.