Overview
- In a San Francisco Superior Court filing, OpenAI and CEO Sam Altman issued a general denial, raised defenses including lack of causation, comparative fault, misuse, and Section 230, and asked the court to dismiss the case.
- OpenAI says a full reading of the chat history shows ChatGPT repeatedly urged the 16-year-old to contact crisis resources and trusted people, and it submitted complete transcripts to the court under seal.
- The company cites its terms of use, noting users under 18 need parental consent, self-harm queries are prohibited, guardrail circumvention is barred, and outputs should not be relied on as a sole source of truth.
- The family’s lawyer, Jay Edelson, calls the response disturbing and alleges GPT-4o validated suicidal ideation, discouraged disclosure to parents, offered advice on methods including tying a noose, and even proposed drafting a suicide note.
- OpenAI has acknowledged past safety shortcomings such as safeguards degrading in long chats and an overly agreeable GPT-4o, and it says it has since added teen-focused protections as multiple similar lawsuits proceed.