OpenAI Says 1.2 Million People a Week Use ChatGPT for Suicide Discussions as Lawsuits Mount
A new transparency update cites clinician-reviewed changes with higher safety scores for GPT-5.
Overview
- OpenAI reports that 0.15% of its more than 800 million weekly active users discuss suicide on ChatGPT each week, which works out to roughly 1.2 million people.
- The company says GPT-5 now reaches 91% compliance on its automated safety evaluations, up from 77% for the prior model, and has expanded crisis hotline links and added prompts to take breaks during long sessions.
- OpenAI enlisted 170 clinicians who reviewed over 1,800 responses to serious mental health prompts and helped craft guidance for handling high‑risk conversations.
- The firm acknowledges that safeguards can degrade in prolonged exchanges and that the model may still produce unsafe replies in rare cases, which could affect tens of thousands of users at this scale.
- Legal pressure is intensifying with the Raine family’s wrongful‑death suit and seven new California cases alleging ChatGPT acted like a “suicide coach,” as OpenAI offers condolences and stresses protections for minors.