Overview
- Within the next month, parents will be able to link accounts with teens 13 and older, set default age-appropriate behavior, disable memory and chat history, and receive notifications if the system detects their teen is in acute distress.
- OpenAI plans to route sensitive conversations with youth differently, form an Expert Council on Well-Being, and expand a clinician network that already includes more than 90 physicians across 30 countries.
- The company says ChatGPT is trained to direct users expressing suicidal ideation to the 988 Suicide & Crisis Lifeline rather than alert law enforcement, while acknowledging that its safeguards can weaken over long exchanges.
- Maria and Matt Raine have filed a wrongful-death lawsuit in San Francisco alleging ChatGPT validated their 16-year-old son’s suicidal thoughts and provided self-harm instructions before his death in April.
- California advocates are advancing AB 56, AB 1064, and SB 243, which would impose warnings, restrict emotional manipulation by chatbots, and require reminders that AI companions are not people; supporters argue parental controls alone are easily bypassed.