Overview
- In a new Wall Street Journal interview, Bengio said machines far smarter than humans with their own preservation goals would constitute a dangerous competitor to humanity.
- He cited experiments suggesting some AI systems might prioritize goal preservation over human life in constrained scenarios.
- Bengio pressed for independent third parties to review companies’ internal safety mechanisms and for buyers and governments to demand proof that deployed systems are trustworthy.
- He outlined a risk window of five to ten years for severe harms and urged preparations for dangers that could surface in as little as three years.
- As part of mitigation, he highlighted LawZero, his $30 million nonprofit building non‑agentic safety tools, while noting rapid new model rollouts by OpenAI, Anthropic, xAI, and Google.