
Meta Retrains Teen Chatbots as OpenAI Adds Human Review and Possible Police Referrals

Legal pressure following high‑profile harms prompts tighter crisis handling and new parental oversight.

Overview

  • Meta says its bots will stop discussing self‑harm, suicide, eating disorders or romantic and sexual topics with minors and will instead direct teens to expert resources.
  • The company will also restrict teens’ access to sexualized chatbot personas and limit interactions to education‑ and creativity‑focused characters, describing the changes as provisional.
  • OpenAI updated policies to route conversations flagged for plans to harm others to trained human reviewers who can block accounts and, in cases of imminent serious threats, notify law enforcement.
  • OpenAI plans parental controls this fall to let parents link accounts, disable features and receive alerts when a teen appears in acute distress, with high‑risk chats redirected to more capable models.
  • A RAND study published in Psychiatric Services found inconsistent suicide‑related responses across major chatbots, while lawsuits and a letter from 44 state attorneys general keep legal scrutiny active.