Overview
- Researchers Eliezer Yudkowsky and Nate Soares publish "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" on Sept. 19.
- Yudkowsky notes that major tech companies themselves claim superintelligent AI could arrive within two to three years.
- The authors argue modern AI is "grown" rather than traditionally engineered, making dangerous behaviors hard to predict or correct.
- Soares says current chatbots are only a stepping stone as companies race to build much more capable systems.
- They urge a complete halt to superintelligent AI development, warning that such a system could commandeer robots, design dangerous viruses, or build infrastructure beyond human control.