Former OpenAI Policy Lead Criticizes Company’s Shift in AI Safety Narrative
Miles Brundage accuses OpenAI of revising its history on cautious AI deployment and warns of risks in its current approach.
- Miles Brundage, a former OpenAI policy researcher, has criticized the company for rewriting the history of its approach to AI safety in a newly published document.
- The document outlines OpenAI’s iterative approach to AI model deployment and cites the GPT-2 rollout as an example of past over-caution, a characterization Brundage disputes as inaccurate.
- Brundage argues that portraying the GPT-2 release as excessively cautious misrepresents what was, at the time, an incremental and responsible deployment strategy fully consistent with the iterative approach OpenAI now espouses.
- He warns that OpenAI’s current philosophy could lead the company to dismiss legitimate safety concerns unless presented with overwhelming evidence of imminent danger.
- The criticism comes as OpenAI faces increased scrutiny over prioritizing rapid product releases and competitiveness over long-term safety considerations.