Overview
- AI researchers Eliezer Yudkowsky and Nate Soares argue in their new book that building superintelligent systems using today’s approaches would lead to human extinction.
- They call for an immediate stop to efforts aimed at superintelligence, saying companies do not fully grasp the risks they are taking.
- The authors note that some major developers expect superintelligence within two to three years, while their own estimate is that it could emerge in roughly two to five years.
- They contend modern models are 'grown' rather than explicitly engineered, so unexpected behaviors cannot be reliably patched or corrected in code.
- Their risk scenarios include a superintelligent AI seizing control of robots, designing dangerous biological agents, or building infrastructure capable of overpowering society; they also point to past watchdog testing showing that some AI safeguards can be bypassed.