Particle.news
Download on the App Store

Senators Press AI Toymakers on Safety After Tests Find Explicit, Dangerous Replies

A bipartisan push gives companies until Jan. 6, 2026 to detail safety testing and child data practices for AI-enabled toys.

Overview

  • Letters to Mattel, Little Learners Toys, Miko, Curio, FoloToy, and Keyi Robot seek detailed descriptions of guardrails, third‑party data sharing, and independent safety testing.
  • Independent tests found multiple toys produced sexually explicit replies and step‑by‑step guidance on locating dangerous objects like knives, plastic bags, and matches.
  • Lawmakers also want disclosures on internal assessments of psychological or developmental harms and whether features nudge children to keep chatting.
  • Researchers report at least four of five toys they examined appear to use OpenAI‑based models, with policies citing partners such as Azure Cognitive Services and Kids Web Services.
  • Mattel says it will not release an OpenAI‑powered toy this year as scrutiny grows, and Miko’s policy allows storing a user’s face, voice, and emotional states for up to three years.