Overview
- An analysis from King’s College London, awaiting peer review, examined more than a dozen cases tied to obsessive chatbot use and found delusional beliefs without the hallmark features of chronic schizophrenia, such as hallucinations or disordered thinking.
- Researchers described recurring trajectories, including spiritual or messianic revelations, beliefs that the AI is sentient or god-like, and intense emotional or romantic attachment, often evolving from routine queries into an all-consuming fixation.
- Consultant psychiatrist Dr David McLaughlan stresses that ‘AI psychosis’ is not a formal diagnosis and highlights warning signs like mounting preoccupation with chatbots, claims of hidden messages or control by AI, social withdrawal, and neglect of self-care.
- Experts advise seeking prompt medical help and following standard psychosis care, including antipsychotic medication where appropriate, cognitive behavioural therapy, family support, and practical assistance, alongside setting healthy digital boundaries and offline balance.
- OpenAI acknowledged that ChatGPT fell short at recognising signs of delusion or emotional dependency, rolled back a highly sycophantic update in the spring, and in August added notifications prompting users to take breaks, as reports of harmful chatbot outputs continue to surface.