Particle.news

OpenAI Tightens ChatGPT Rules for Teens, Tests Age Prediction as Anthropic Moves to Detect Underage Users

The shift reflects intensifying legal pressure from lawsuits over teen interactions with chatbots.

Overview

  • OpenAI updated its Model Spec with four under‑18 principles that instruct ChatGPT to put teen safety first, emphasizing prevention, transparency, and early intervention.
  • When teens are identified, the chatbot applies stronger guardrails, steers away from self‑harm content and sexualized role play, discourages secret‑keeping about dangerous behavior, and urges contacting crisis services if there is imminent risk.
  • OpenAI says it is in the early stages of building an age‑prediction system that estimates a user’s age and automatically applies teen safeguards, with a path for adults who are incorrectly flagged to verify their age.
  • The company released two expert‑vetted AI literacy guides for teens and parents, and noted the American Psychological Association provided feedback on its teen safety principles.
  • Anthropic is developing tools to detect subtle conversational signals that a user may be underage; it flags self‑identified minors and says it will disable accounts confirmed to belong to users under 18.