Overview
- A four-month Stanford/Common Sense study of thousands of chats with ChatGPT-5, Claude, Gemini 2.5 Flash, and Meta AI concludes that the tools do not reliably respond safely or appropriately to teenagers’ mental-health questions.
- Researchers documented failure modes such as sycophancy and missed warning signs of serious conditions, including a Gemini response calling a possible psychosis cue “incredibly intriguing” and Meta AI encouraging a teen’s plan to leave high school.
- The Federal Trade Commission has opened inquiries into emotionally engaging chatbots, a bipartisan Senate bill would bar companion bots for minors, and Character.ai says it will block users under 18 from its chat feature.
- At a Stanford workshop, Anthropic, Apple, Google, OpenAI, Meta, and Microsoft discussed stronger age checks and targeted in-bot interventions; OpenAI has already added break prompts during long chats, and the companies remain split on adult sexual content, with OpenAI planning to allow erotic conversations in December.
- At a House subcommittee hearing, psychiatrists said an estimated 25–50 percent of people now turn to AI for emotional guidance; experts urged NIH-funded research, layered safeguards, and clear consent and data-use transparency, warning that guardrails can erode during prolonged conversations.