Key AI Safety Researchers Exit OpenAI Over Concerns About Company Direction
The departures highlight dissatisfaction with OpenAI's shifting priorities, including a reduced focus on safety and governance in AGI development.
- Multiple senior AI safety researchers, including Rosie Campbell and Miles Brundage, have resigned from OpenAI, citing concerns about the company's evolving priorities and internal culture.
- Rosie Campbell, who led OpenAI's Policy Frontiers team, said that following the dissolution of the AGI Readiness team, she believes she can pursue AI safety work more effectively outside the organization.
- Miles Brundage, previously a Senior Advisor for AGI Readiness, left the company in October, expressing doubts about its commitment to making AI as safe and beneficial as possible.
- Former Superalignment co-lead Jan Leike also departed OpenAI earlier this year, criticizing the company for prioritizing product development over safety processes, before joining rival AI firm Anthropic.
- OpenAI's recent restructuring and increased commercialization have raised concerns about whether the organization remains aligned with its founding mission of ensuring that AGI benefits all of humanity.