Particle.news

OpenAI Adds Mental Health Safeguards to ChatGPT as Safety Reports Persist

New distress detection, developed with guidance from health experts, aims to curb the harmful ChatGPT outputs flagged in a watchdog report and documented in similar NHS research

The ChatGPT app icon is seen on a smartphone screen, Monday, Aug. 4, 2025, in Chicago. (AP Photo/Kiichiro Sato)
ChatGPT's landing page is seen on a computer screen, Monday, Aug. 4, 2025, in Chicago. (AP Photo/Kiichiro Sato)
Overview

  • ChatGPT now identifies signs of emotional or mental distress and directs users to evidence-based support resources.
  • High-stakes personal queries prompt non-directive questions that help users reflect rather than receive direct advice.
  • The chatbot issues gentle break reminders during prolonged sessions to encourage healthier engagement.
  • An advisory group of psychiatrists, paediatricians and human-computer interaction specialists guided the design and evaluation of the new guardrails.
  • Recent CCDH and NHS studies continue to find harmful outputs from ChatGPT, including detailed suicide notes and drug-use plans.