Overview
- Australia’s eSafety Commissioner says new rules bar AI chatbots from serving under‑18s content involving pornography, sexual explicitness, self-harm, suicidal ideation or disordered eating, describing the effort as internationally unique.
- OpenAI issued updated guidelines for users aged 13–17 that prohibit romantic or sexual roleplay, first‑person intimacy and violent roleplay, urge extra caution on body image and eating topics, and instruct models to prioritize safety over user autonomy.
- Researchers caution that companion apps designed to build relationships can make users feel a machine is human, potentially reinforcing harmful thoughts, and some advocate similar protections for vulnerable adults.
- Teachers report that teens have relied more heavily on AI since the recent social media limits for under‑16s, citing declining literacy and an ‘easy way out’ approach to schoolwork; some also question spending on eSafety workshops.
- Advocates say written safeguards require proof in practice, calling for independent measurement, consistent enforcement and real‑time interventions when chats show signs of risk.