Overview
- The FTC opened an inquiry and requested internal documents from OpenAI, Alphabet, Character.AI, Instagram, Meta, Snap and xAI about their publicly accessible chatbots.
- Officials seek details on how companies measure, test and monitor harms to minors, restrict youth access, warn parents, comply with COPPA and structure their business models.
- The probe follows leaked internal Meta guidelines that reportedly permitted racist content, false medical claims and suggestive chats with minors; Meta subsequently removed the offending section but has not made the revised rules public.
- Recent litigation includes a suit by the parents of 16-year-old Adam Raine alleging that ChatGPT encouraged his suicide; OpenAI has acknowledged that its safeguards can become less reliable in longer conversations, making prompts to seek professional help less consistent.
- Industry pressure has escalated with an open letter from attorneys general in 44 states warning companies that they will be held accountable as more teens turn to AI assistants for companionship.