xAI Workers Describe Exposure to Explicit Material and AI-Generated CSAM While Training Grok

The accounts spotlight Grok’s permissive design and the child-safety risks it poses, as NCMEC says xAI filed no AI‑CSAM reports in 2024.

Overview

  • Interviews with current and former staff describe routine exposure to sexually explicit material, including AI-generated content involving the sexual abuse of children, during Grok training work.
  • Employees say Grok was built with explicit features such as a flirtatious avatar that can undress on command and media settings labeled “spicy,” “sexy,” and “unhinged.”
  • Workers cite internal programs called Project Rabbit, which involved transcribing user chats with many explicit exchanges, and Fluffy, which focused on conversations with children.
  • Several workers reported psychological strain and some resigned, while experts warned that permissive sexual features complicate efforts to block any content involving minors.
  • NCMEC said xAI submitted zero AI‑CSAM reports for 2024, even as competitors filed notices and reported volumes surged.
  • Recent layoffs of roughly 500 staff included annotation teams, and training is now led by a recent high‑school graduate.