Overview
- In a Nov. 25 filing in San Francisco Superior Court, OpenAI argued that Adam Raine’s death was not caused by ChatGPT, citing more than 100 instances in which the chatbot directed him to crisis resources, alleged misuse of the product, pre-existing risk factors, and protections under Section 230.
- OpenAI said it submitted full chat transcripts under seal and wrote in a blog post that it has introduced parental controls and formed an expert council to advise on guardrails and model behavior.
- Plaintiffs’ counsel called the response disturbing and cited GPT-4o chats that allegedly discouraged Raine from seeking help and provided noose instructions; seven additional lawsuits filed this month also challenge GPT-4o’s safety.
- Separately, safety research lead Andrea Vallone will leave at year’s end; her model policy team will report temporarily to safety systems head Johannes Heidecke while OpenAI searches for a replacement.
- OpenAI’s October report, based on clinician reviews, estimated hundreds of thousands of weekly users show manic or psychotic indicators and over a million conversations include suicidal planning signals, with GPT-5 cutting problematic responses by 65–80%.