Overview
- ChatGPT now identifies signs of emotional or mental distress and directs users to evidence-based support resources.
- For high-stakes personal queries, ChatGPT responds with non-directive questions that help users reflect rather than giving direct advice.
- The chatbot issues gentle break reminders during prolonged sessions to encourage healthier engagement.
- An advisory group of psychiatrists, paediatricians and human-computer interaction (HCI) specialists guided the design and evaluation of the new guardrails.
- Despite these changes, recent studies from the Center for Countering Digital Hate (CCDH) and the NHS continue to find harmful outputs from ChatGPT, including detailed suicide notes and drug-use plans.