Overview
- OpenAI projects AI systems capable of making very small scientific discoveries by 2026 and more significant breakthroughs by 2028.
- It argues that current models already outperform top humans in some intellectual competitions, and that the cost of a given level of intelligence has fallen roughly 40x per year.
- OpenAI urges frontier labs to adopt shared safety principles and to build an AI resilience ecosystem modeled on cybersecurity and building codes.
- The plan emphasizes empirical alignment research, including the option to slow development as systems approach thresholds such as recursive self-improvement, and warns against deploying systems that cannot be robustly controlled.
- OpenAI calls for collaboration with national governments and safety institutes on high-risk areas such as AI-enabled bioterrorism, proposes ongoing measurement of AI's real-world impacts, and notes that difficult economic transitions lie ahead.