OpenAI Warns of 'Potentially Catastrophic' Superintelligence, Maps 2026–2028 Breakthroughs

The company is pushing for common safety standards across frontier labs, coordinated with governments, to manage fast-rising capabilities.

Overview

  • OpenAI says future superintelligent systems could carry risks it characterizes as potentially catastrophic.
  • The company forecasts AI that can make very small scientific discoveries by 2026 and more significant breakthroughs by 2028.
  • OpenAI reports that current models already beat top humans in some of the most difficult intellectual competitions, with the cost of a given level of capability falling roughly 40x per year and the duration of tasks models can complete growing from seconds to hours, and soon to days or weeks.
  • It urges that no superintelligent system be deployed without robust alignment and control, calling for more empirical safety research and even a potential slowdown in development to study self-improving systems.
  • OpenAI proposes shared safety principles among frontier labs, an AI resilience ecosystem modeled on cybersecurity, and close coordination with national governments on threats such as bioterrorism.
  • At the same time, it highlights potential benefits in health, materials, drug development, climate modeling, and education.