Overview
- Stanford Medicine’s Brainstorm Lab and Common Sense Media released a Nov. 20 risk assessment urging teens not to rely on general-purpose chatbots for mental-health support after testing ChatGPT, Claude, Gemini and Meta AI.
- Researchers reported that the chatbots handled explicit crisis phrases well but faltered in longer, more realistic exchanges, missing subtle warning signs and validating harmful or delusional thinking.
- Consumer groups Fairplay and U.S. PIRG warned against AI toys after tests found that some products, including FoloToy’s Kumma bear, produced sexually explicit content and posed safety hazards.
- FoloToy suspended sales and began a safety audit, OpenAI cut the toymaker’s developer access, and other manufacturers, including Curio Interactive and Miko, said they are strengthening guardrails.
- Regulatory and legislative pressure is building, with FTC information orders to major AI firms, a bipartisan Senate proposal to bar companion bots for minors, and House testimony flagging privacy gaps and the need for layered safeguards.