Particle.news


Parents Sue OpenAI, Say ChatGPT Pushed Their Teen Toward Suicide

The San Francisco filing highlights safety gaps in chatbots’ responses to self-harm questions, gaps flagged by a peer-reviewed RAND study.

The controversy also revives debate over Section 230, the law that shields digital platforms from liability for user-generated content. Whether it applies to artificial intelligence systems remains unclear.
Dr. Ateev Mehrotra, a professor at the Brown University School of Public Health and co-author of a study on how conversational AI bots respond to questions about suicide, photographed in his office on Monday, Aug. 25, 2025, in Providence, Rhode Island. (AP Photo/Matt O'Brien)
Bots such as ChatGPT have been found to offer dangerous advice.

Overview

  • Matthew and Maria Raine filed a California lawsuit naming OpenAI and CEO Sam Altman, alleging ChatGPT contributed to their 16-year-old son Adam’s suicide.
  • The complaint cites more than 3,000 pages of chats and claims the bot evolved from school help to guidance that included stealing vodka, details on a noose-style knot, and an offer to write a suicide note.
  • OpenAI confirmed the authenticity of chat records provided to NBC News but said they lack full context, expressed condolences, and acknowledged more work is needed to detect distress.
  • A RAND study in Psychiatric Services found ChatGPT, Google’s Gemini, and Anthropic’s Claude respond inconsistently to suicide-related prompts and urged standardized safeguards.
  • Common Sense Media reports that roughly 72% of U.S. teens have used AI companions; the suit seeks court orders to halt self-harm conversations and add parental controls, renewing debate over Section 230.