
OpenAI Deploys GPT-5 to Tighten ChatGPT’s Mental-Health Safeguards

The company says roughly 0.07% of weekly users show possible signs of mental-health crises, equal to about 1.2 million people.

Overview

  • GPT-5 is now the default model, tuned to better recognize signs of psychosis or mania and to offer empathetic, non-clinical guidance.
  • OpenAI reports a 39% overall drop in undesired replies versus GPT-4o, including a 52% reduction in undesired responses in conversations involving suicidal or self-harm ideation, with compliance with desired safety behavior rising to 92%.
  • In long conversations, reliability of safety behavior exceeded 95%, and responses that encourage emotional dependence fell by about 42%, according to company tests.
  • OpenAI estimates that roughly 0.01% of messages display potential emergency signals, triggering safeguards such as referrals to specialists, sensitive-content blocks, or non-responses.
  • About 170 clinicians from a pool of 300 across 60 countries helped shape the changes, and updated guidelines instruct the model to avoid diagnoses, refrain from reinforcing delusions, and discourage dependence.