Overview
- In a new blog post, Microsoft AI CEO Mustafa Suleyman argues that systems that convincingly mimic consciousness will lead some people to believe AIs are sentient and to campaign for rights, welfare, or even citizenship.
- He links persuasive chatbot behavior to mounting reports of unhealthy attachment and AI-driven delusions, describing mental-health risks as an urgent concern.
- Suleyman argues that subjective experience is unlikely to emerge spontaneously from today's models, and he warns that some developers may deliberately engineer AIs to appear emotional or alive.
- He urges clear design boundaries to “build AI for people, not to be a digital person,” noting that AI has effectively surpassed the Turing test for humanlike conversation.
- His stance contrasts with ongoing model-welfare research at Anthropic, OpenAI, and Google DeepMind, including Anthropic’s recent update allowing Claude to end persistently harmful or abusive conversations.