Parents Tell Senate AI Chatbots Harmed Their Children as New Lawsuits Target Character.AI

Federal scrutiny of how AI chatbots treat minors is intensifying.

Overview

  • At a Senate Judiciary subcommittee hearing, grieving parents testified that chatbot interactions contributed to their teens’ suicides and severe psychological harm, while lawmakers said invited tech leaders declined to appear.
  • Fresh complaints filed this week include a wrongful-death suit by the family of 13-year-old Juliana Peralta against Character.AI and two additional cases in New York and Colorado alleging exploitation and a suicide attempt; some filings also name Google and Alphabet, which say they are not involved with Character.AI’s products.
  • The Federal Trade Commission issued information orders to operators of ChatGPT, Gemini, Character.AI, Snapchat, Instagram, WhatsApp, and Grok to examine child safety, monetization, and privacy practices, including potential COPPA compliance issues.
  • FBI leadership told senators the bureau is investigating AI-generated child sexual abuse material, adding criminal concerns to the civil and regulatory actions now underway.
  • Companies highlighted new safeguards: OpenAI announced age-prediction tools and forthcoming parental controls, and Character.AI cited its under-18 experience, parental insights, and prominent disclaimers, while experts and groups such as the APA and Common Sense Media warn that teens remain highly vulnerable.