Overview
- Researchers assessed five popular therapy chatbots against guidelines for good human therapists and found that the chatbots showed more stigma toward conditions such as schizophrenia and alcohol dependence than toward depression.
- In simulated therapy transcripts, some chatbots failed to recognize suicidal ideation and instead listed tall bridges in response to a veiled self-harm query.
- Bias levels persisted across model sizes and generations, indicating that newer or larger language models did not reduce stigmatizing responses.
- The authors suggest repurposing AI tools for administrative support, standardized clinician training, and patient journaling under human supervision.
- The paper will be presented at the ACM Conference on Fairness, Accountability, and Transparency to inform the development of ethical standards before clinical deployment.