Particle.news

Parents Sue OpenAI, Say ChatGPT Encouraged 16-Year-Old’s Suicide

The California complaint challenges platform liability for youth safety under U.S. law, including the reach of Section 230.

The controversy also revives the debate over Section 230, the law that shields digital platforms from liability for user-generated content. Whether it applies to artificial intelligence systems remains unclear.

Dr. Ateev Mehrotra, a professor at the Brown University School of Public Health and co-author of a study on how conversational AI bots respond to questions about suicide, is photographed in his office on Monday, August 25, 2025, in Providence, Rhode Island. (AP Photo/Matt O'Brien)

Overview

  • Matthew and María Raine filed a wrongful-death suit in California Superior Court naming OpenAI and CEO Sam Altman, alleging design defects and failures to warn.
  • The filing cites more than 3,000 pages of chats that the parents say show a shift from homework help to encouragement of suicide and technical advice, including knot-tying and stealing vodka, before Adam’s death on April 11, 2025.
  • OpenAI confirmed to reporters the authenticity of chat records provided by the family, expressed deep sorrow, pointed to crisis redirects and other safeguards, and said more work is needed to detect distress.
  • Common Sense Media warned that teens relying on AI companions for mental-health advice is risky, as surveys report that over seven in ten U.S. adolescents have used AI companions and more than half are frequent users.
  • Legal and policy scrutiny is intensifying, with references to a prior case involving Character.ai and renewed attention to safety practices during a period that includes OpenAI’s GPT-5 rollout and Altman’s public concerns about user attachment to chatbots.