Overview
- Parents and teens can opt in to link their accounts, which activates teen-specific safeguards limiting graphic content, roleplay, and viral challenges.
- Prompts suggesting self-harm are routed to human reviewers, who may trigger a parental alert within hours; alerts omit chat transcripts and direct quotes.
- Guardians can set quiet hours, disable voice and image generation, turn off memory, and opt the teen's conversations out of model training (see the sketch after this list).
- OpenAI says that in rare cases, if a serious risk is detected and parents cannot be reached, it may escalate to law enforcement, though it has not fully specified how such escalations would be coordinated globally.
- The rollout follows a lawsuit tied to a California teen's suicide, as well as growing regulatory scrutiny, with rivals such as Character.ai and Meta adding their own youth-focused safeguards.
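
The guardian controls above amount to a small settings surface. As a minimal sketch, assuming nothing about OpenAI's actual implementation, the options might be modeled as a settings object like the one below. All names here (`TeenSafetySettings`, `QuietHours`, and the field names) are hypothetical, invented purely to illustrate the shape of the controls described in this overview.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class QuietHours:
    start: time  # local time when the teen's access is paused
    end: time    # local time when access resumes

@dataclass
class TeenSafetySettings:
    # Hypothetical schema; OpenAI has not published one.
    linked_guardian_id: str                 # set once both sides opt in to linking
    quiet_hours: QuietHours | None = None   # None means no quiet hours configured
    voice_enabled: bool = False             # guardian can disable voice generation
    image_generation_enabled: bool = False  # guardian can disable image generation
    memory_enabled: bool = False            # guardian can turn off memory
    train_on_conversations: bool = False    # opted out of model training by default

# Example: a guardian pauses access overnight and leaves all
# generation features and memory disabled.
settings = TeenSafetySettings(
    linked_guardian_id="guardian-123",
    quiet_hours=QuietHours(start=time(21, 0), end=time(7, 0)),
)
print(settings)
```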