Overview
- No company scored better than a “weak” rating in SaferAI’s July 17 assessment of risk-management protocols; Anthropic (35%) and OpenAI (33%) ranked highest.
- Every firm earned a D or lower for existential safety on the Future of Life Institute’s index, while overall FLI grades ranged from C+ for Anthropic down to D for xAI and Meta.
- Since SaferAI’s inaugural October 2024 survey, Anthropic’s and Google DeepMind’s risk-management scores have declined, while xAI’s rose from 0% to 18% and Meta’s from 14% to 22%.
- None of the developers has presented a coherent, actionable plan to control future AGI threats despite public ambitions to reach human-level AI within the decade.
- Google DeepMind and other companies have formally disputed the studies’ scope, arguing that key internal safety measures were excluded from the public evaluations.