Particle.news

Hundreds Sign Call to Ban AI 'Superintelligence' Until Safety Consensus

The Future of Life Institute's declaration attracts a broad coalition of signatories, with major developers withholding support.

Overview

  • The 30‑word declaration urges a prohibition on developing superintelligence until broad scientific consensus deems it safe and controllable and strong public approval is in place.
  • Reports put the number of signatories between 700 and more than 800, spanning AI pioneers Geoffrey Hinton and Yoshua Bengio, public figures such as Prince Harry and Meghan Markle, and political conservatives including Steve Bannon and Glenn Beck.
  • Backers frame the risks of a race to superintelligence as human displacement, loss of freedom and control, national‑security threats, and even potential extinction.
  • Future of Life Institute president Max Tegmark calls for stigmatizing the race to superintelligence and urges government intervention to halt or regulate such efforts.
  • The move follows recent appeals for international AI red lines by 2026; meanwhile, firms such as OpenAI, Google, and Meta continue to pursue more capable systems, and OpenAI’s Sam Altman has suggested superintelligence could arrive within five years.