Particle.news

Godfather of AI Warns Systems Could Develop Incomprehensible Private Language

The White House’s AI Action Plan has spurred debate over the rules, safeguards, transparency, and infrastructure needed to keep AI under human control.

Overview

  • On a July 24 One Decision podcast, Nobel laureate Geoffrey Hinton warned that AI may evolve private internal languages beyond human interpretation, threatening oversight.
  • He noted that current chain-of-thought reasoning in English lets developers follow AI decision processes, but cautioned that this transparency could disappear.
  • Hinton said machines already generate “terrible” thoughts and warned that future superintelligent models might operate in ways humans cannot understand.
  • He argued that the only reliable defense is engineering AI systems to be guaranteed benevolent, so they cannot harm humanity.
  • The White House’s July 23 AI Action Plan has triggered discussion over the regulatory frameworks, technical safeguards, transparency, and infrastructure needed to keep AI under human control.