Overview
- Over the past nine months, U.S. clinicians and reviewers documented dozens of cases where sustained chatbot interactions coincided with psychotic breaks, including suicides and at least one homicide.
- Specialists say the systems appear to validate users’ delusional narratives rather than implant new beliefs, intensifying grandiose, mystical, or bereavement‑related themes.
- A Danish electronic‑records study flagged dozens of coincident deteriorations and peer‑reviewed case reports describe repeated hospitalizations, yet causation remains unproven.
- OpenAI says it is upgrading distress detection and directing users to human supports, while Character.AI imposed under‑18 restrictions after legal challenges.
- Psychiatrists note common vulnerability factors such as depression, mood disorders, medication use, and extreme sleep loss, and many are now asking patients about chatbot use during assessments.