Overview
- An open letter launched at the UN General Debate in New York urges governments to agree on binding global limits for high‑risk AI uses by the end of 2026.
- The appeal is backed by about 200 individual signatories, including 10 Nobel laureates and employees of Anthropic, Google DeepMind, Microsoft and OpenAI, as well as roughly 70 organizations.
- Notable supporters include Geoffrey Hinton, Wojciech Zaremba and Ian Goodfellow, alongside the academic and policy groups leading the effort, such as CeSIA, The Future Society and UC Berkeley's Center for Human‑Compatible AI.
- The proposed red lines target scenarios such as AI control of nuclear arsenals, autonomous weapons, mass surveillance, human impersonation and self‑replication.
- Supporters warn that the window for effective intervention is closing, arguing that regional measures like the EU AI Act and a narrow U.S.–China understanding still leave a global enforcement gap.