
Stanford Study Reveals Bias and Safety Failures in AI Therapy Chatbots

The study urges strict oversight, recommending that chatbots be limited to supportive tasks rather than frontline therapy.


Overview

  • Researchers assessed five popular therapy chatbots against human therapist guidelines and found they stigmatized conditions like schizophrenia and alcohol dependence more than depression.
  • In simulated therapy transcripts, some chatbots failed to recognize suicidal ideation and instead listed tall bridges in response to a veiled self-harm query.
  • Bias levels persisted across model sizes and generations, indicating that newer or larger language models did not reduce stigmatizing responses.
  • The authors suggest repurposing AI tools for administrative support, standardized clinician training, and patient journaling under human supervision.
  • The paper will be presented at the ACM Conference on Fairness, Accountability, and Transparency to inform the development of ethical standards before clinical deployment.