Particle.news

Former OpenAI Policy Lead Accuses Company of Revising AI Safety Narrative

Miles Brundage criticizes OpenAI's recent blog post for misrepresenting the cautious rollout of GPT-2 and raises concerns about the company's evolving safety philosophy.

A former policy lead at OpenAI is accusing the company of rewriting its history in a new post about its approach to safety and alignment.

Overview

  • Miles Brundage, a former OpenAI policy lead, claims the company is misrepresenting its cautious release of GPT-2 in a recent blog post about AI safety and alignment.
  • The blog outlines OpenAI's philosophy of "iterative deployment," which involves releasing AI systems incrementally to gather safety insights; Brundage argues that GPT-2's cautious 2019 rollout was already consistent with this approach.
  • Critics, including Brundage, warn that the blog post appears to downplay safety concerns by suggesting that overwhelming evidence of imminent danger is required before delaying an AI release.
  • AI safety experts and former employees have raised broader concerns about OpenAI prioritizing speed and competition over transparency and long-term safety measures.
  • OpenAI's evolving stance on safety comes as it faces mounting competitive pressure from rivals such as DeepSeek and ongoing financial challenges, with some questioning whether profit motives are shaping its decisions.