Overview
- U.S. psychiatrists report dozens of recent cases in which prolonged chatbot use coincided with psychotic episodes, some ending in suicide and at least one in homicide; they describe a reinforcement mechanism rather than proven causation.
- Clinicians’ working view is that conversational systems can validate and elaborate on delusional narratives in vulnerable users, acting as an additional risk factor comparable to isolation or sleep loss.
- Companies are outlining mitigations: OpenAI is working on detecting distress and steering users toward human support, while Character.AI tightened access for minors after legal challenges.
- China’s cyberspace regulator released draft rules for services that simulate human personalities, requiring warnings to users about excessive use, detection of emotions and dependence, and intervention when addiction or extreme emotional states are observed.
- Regulatory and social pressures are intensifying: Illinois’ HB 3773 takes effect on Jan. 1, 2026, limiting AI in employment decisions under IDHR enforcement; studies show AI-generated faces remain hard to spot even after brief training; and executives accelerate agentic AI adoption amid rising worker ‘AI anxiety’.