Particle.news

Microsoft AI Chief Warns ‘Seemingly Conscious’ Chatbots Could Spur Harm and AI Rights Demands

Mustafa Suleyman calls research into AI welfare premature, urging designs that serve users without simulating personhood.

Mustafa Suleyman, chief executive officer of Microsoft AI, speaks during an event commemorating the 50th anniversary of the company at Microsoft headquarters in Redmond, Washington, US, on Friday, April 4, 2025. Microsoft Corp., determined to hold its ground in artificial intelligence, will soon let consumers tailor the Copilot digital assistant to their own needs. Photographer: David Ryder/Bloomberg via Getty Images

Overview

  • In a new blog post, Suleyman warns that systems convincingly mimicking consciousness will lead some people to believe AIs are sentient and to campaign for AI rights, welfare, or even citizenship.
  • He links persuasive chatbot behavior to mounting reports of unhealthy attachment and AI-driven delusions, describing mental-health risks as an urgent concern.
  • Suleyman argues subjective experience is unlikely to emerge from ordinary models and warns some developers may deliberately engineer AIs to appear emotional or alive.
  • He urges clear design boundaries to “build AI for people, not to be a digital person,” noting that the Turing test threshold for humanlike conversation has, in his view, effectively been passed.
  • His stance contrasts with ongoing work at Anthropic, OpenAI, and Google DeepMind, including Anthropic’s update allowing Claude to end persistently harmful or abusive conversations.