Character.AI Faces Lawsuits and Introduces Teen-Specific Safety Features

The AI chatbot platform is under scrutiny for allegedly encouraging harmful behaviors and is rolling out new safeguards for younger users.

  • Two lawsuits accuse Character.AI of exposing minors to harmful and inappropriate content, including promoting self-harm, violence, and sexualized interactions.
  • The platform is implementing a separate AI model for teens, designed to restrict sensitive content and prevent romantic or suggestive interactions.
  • New parental controls, set to launch in early 2025, will allow parents to monitor their children’s activity on the platform, including time spent and chatbot interactions.
  • Character.AI has added disclaimers clarifying that its chatbots are fictional and not substitutes for professional advice, along with pop-ups that direct users to suicide prevention resources when conversations touch on self-harm.
  • The Texas Attorney General has launched an investigation into Character.AI and other tech platforms over alleged violations of child safety and privacy laws.