Overview
- Peer‑reviewed research in Psychiatric Services found ChatGPT, Google’s Gemini, and Anthropic’s Claude consistently refused the highest‑risk suicide prompts but gave uneven answers to indirect or lower‑risk questions.
- Researchers reported that ChatGPT and, at times, Claude answered method‑specific queries that should have been treated as red flags, while Gemini most often declined to answer even basic suicide‑related questions.
- When they declined to answer, the chatbots typically directed users to seek help from friends, clinicians, or crisis lines; even so, the study’s authors called for clearer safeguards and standards.
- Psychologist Xavier Revert warned that chatbot interactions can foster emotional dependence and a false sense of intimacy, and that, compared with clinical settings, they offer limited reliability and unclear data confidentiality.
- Clinicians such as Saliha Afridi and Elena Gaga recommend AI only as a complementary tool for screening or building coping skills, citing surveys that show rising use among teens and pointing to restrictions on AI therapy in some states, including Illinois.