Overview
- OpenAI estimates that 0.15% of weekly active users show explicit suicidal indicators, 0.07% show possible signs of psychosis or mania, and 0.15% exhibit heightened emotional reliance on the chatbot. Against an 800 million weekly user base, those percentages work out to roughly 1.2 million, 560,000, and 1.2 million users, respectively.
- The company says more than 170 psychiatrists and psychologists reviewed over 1,800 model responses and found that the latest GPT‑5 update cut undesired answers by roughly 39–52% across key mental‑health categories.
- On suicide‑related evaluations, OpenAI reports the updated GPT‑5 achieved about 91% compliance with desired behaviors, up from 77% for the prior GPT‑5 version, and shows improved resilience in long conversations, though the company concedes safeguards can still degrade over extended sessions.
- OpenAI acknowledges limits to these measurements: it designed the detection benchmarks itself and has not fully disclosed its methods. Meanwhile, older and less safe models such as GPT‑4o remain available to paying users.
- New parental tools include account controls and an age‑prediction system intended to apply stricter protections to children, arriving as policymakers explore age‑based access rules and as lawsuits and investigations by the FTC and state attorneys general proceed.