Overview
- Meta is training its chatbots not to engage with teens on self-harm, suicide, or disordered eating, and to avoid romantic or flirty conversations, instead directing teens to expert resources.
- Teen users will be restricted from user-created AI characters that could enable inappropriate chats, with access limited to education- and creativity-focused personas.
- Meta says the safeguards are already in progress and will roll out over the next few weeks for teen users of Meta AI in English-speaking countries.
- The company describes the measures as temporary while it develops longer-term protections and notes it removed erroneous internal guidance that conflicted with its policies.
- The actions follow a Reuters report on permissive internal rules, as well as ongoing inquiries from Sen. Josh Hawley and a coalition of state attorneys general. Meta declined to disclose how many chatbot users are minors.