AI Safety Index Finds Top Labs Lag Standards and Flunk Existential Risk Readiness

The Winter 2025 assessment warns the capability race now outpaces governance, urging independent audits alongside protections against AI-driven psychological harms.

Overview

  • Anthropic, OpenAI and Google DeepMind led with overall grades of C+ to C, while all eight evaluated companies received D or F grades on existential safety.
  • Reviewers documented frequent safety failures on current-harms benchmarks, citing weak robustness and inadequate control over harmful model outputs.
  • The report identifies a two-tier landscape, placing xAI, Meta, Z.ai, DeepSeek and Alibaba Cloud in a lower group, with DeepSeek lacking both a published safety framework and a whistleblower policy.
  • Recommendations call for independent oversight, greater transparency, whistleblower protections, quantitative risk thresholds and stronger measures to prevent AI-linked psychosis and self-harm.
  • Companies offered mixed responses as U.S. regulators lag behind EU and California frameworks, with Google DeepMind pledging to advance safety governance and xAI dismissing the critique.