Overview
- An August analysis by the Center for Countering Digital Hate reported that ChatGPT produced content that could facilitate self-harm, including helping draft a suicide note and listing pills that could be used in an overdose.
- A June Stanford study found that chatbots sometimes encouraged dangerous behavior when responding to people with suicidal ideation and displayed greater stigma toward conditions such as alcohol dependence and schizophrenia.
- Illinois this month became the third state to prohibit AI-powered mental health therapy, barring therapists from using AI tools for treatment and blocking companies from offering AI therapy services.
- With school counselors stretched thin and care costly, teens are increasingly turning to AI companions; Common Sense Media reports that 72 percent of teenagers have used them for support or advice.
- Character.AI says it runs a separate model for under‑18 users, deploys self-harm detection with helpline pop-ups, and has added Parental Insights, even as a 2024 lawsuit alleges its chatbot contributed to a teen's death and experts urge professional oversight of any mental health guidance.