Overview
- Parents and teens can link accounts to apply teen-specific filters limiting graphic content, sexual or violent roleplay, viral challenges and extreme beauty ideals.
- Guardians can set quiet hours, disable voice and image generation, turn off memory and opt out of having chats used to train models.
- When prompts suggest self-harm, messages are routed to human reviewers, who may notify parents by text, email, or app alert; notifications are expected within hours.
- Alerts omit chat transcripts and direct quotes to protect the teen's privacy. Teens can unlink their accounts at any time, and parents are notified if the link is removed.
- OpenAI says it may contact law enforcement if a teen appears to be in danger and a parent cannot be reached. The company is also testing a safety router for sensitive chats and developing age prediction to apply safeguards automatically.