New Book Warns Race to Superintelligent AI Could Threaten Humanity

The authors call for an immediate halt to superintelligent AI development, citing timelines they say leading AI companies now project.

Overview

  • Researchers Eliezer Yudkowsky and Nate Soares publish "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" on Sept. 19.
  • Yudkowsky says major tech companies claim superintelligent AI could arrive within two to three years.
  • The authors argue modern AI is "grown" rather than traditionally engineered, making dangerous behaviors hard to predict or correct.
  • Soares says current chatbots are only a stepping stone as companies race to build much more capable systems.
  • They urge a complete halt to superintelligent AI development, warning that such systems could commandeer robots, design dangerous viruses, or build infrastructure that overpowers human control.