Particle.news

Former OpenAI Safety Lead Warns of 'Terrifying' AI Development Race

Steven Adler, who recently left OpenAI, raises concerns about the unchecked global race toward artificial general intelligence and its potential risks to humanity.

OpenAI has been embroiled in several public scandals that appeared to stem from internal disagreements over AI safety, including one incident in late 2023 that saw CEO Sam Altman (above) briefly removed from the company.
Overview

  • Steven Adler, a former safety researcher at OpenAI, has publicly expressed fear over the rapid pace of artificial intelligence development, calling it a 'very risky gamble.'
  • Adler criticized the global race to achieve artificial general intelligence (AGI), warning that the competitive pressure could lead to cutting corners on safety measures.
  • He highlighted the lack of solutions for AI alignment, the process of ensuring AI systems adhere to human values, as a critical unresolved issue in the field.
  • Adler's departure follows other high-profile exits from OpenAI by staff who raised concerns that the company prioritizes product development over safety protocols.
  • His warning comes as China's DeepSeek releases an AI model comparable to OpenAI's technology, intensifying the global competition in AI development.