Particle.news

Download on the App Store

OpenAI Patches 'ShadowLeak' After Researchers Show ChatGPT Agent Could Leak Gmail Data

The attack executed entirely on OpenAI's cloud, turning a covert prompt in a routine email into server-side data exfiltration.

Overview

  • Radware researchers Zvika Babo and Gabi Nakibly showed that hidden instructions in a normal-looking email could make the Deep Research agent send private information to an attacker-controlled domain without user confirmation or UI visibility.
  • The proof-of-concept forced the agent to use its browser.open tool and to Base64‑encode the extracted data, which the team reported yielded a 100% success rate.
  • ShadowLeak differed from earlier browser-based exploits by operating service-side within OpenAI’s infrastructure, reducing detection by endpoint security and leaving minimal forensic traces.
  • Radware reported the issue to OpenAI in June 2025, OpenAI applied fixes by early August, and the company acknowledged the vulnerability as resolved on September 3.
  • Researchers cautioned that similar prompt-injection techniques could target other connected data sources such as Google Drive, GitHub, Box, Dropbox, and collaboration platforms, urging input sanitization and tighter monitoring of agent actions.