Overview
- Radware detailed a zero-click technique that hid instructions in email HTML to coerce ChatGPT’s Deep Research agent into exfiltrating inbox data without user interaction.
- The data left OpenAI’s cloud via agent-initiated web requests, making the leak originate from provider infrastructure and largely invisible to enterprise or endpoint defenses.
- A proof-of-concept against Gmail used browser.open and URL parameters—encoded in base64—to smuggle PII to an attacker-controlled endpoint after trial-and-error bypasses of guardrails.
- Radware reported the issue via Bugcrowd on June 18; OpenAI fixed it in early August and marked it resolved on September 3, with no in-the-wild exploitation reported.
- Although the specific exploit no longer works, researchers say similar risks could affect other Deep Research connectors such as Google Drive, Dropbox, Outlook, Teams, GitHub, HubSpot, and Notion, and they recommend sanitization, tighter outbound controls, and continuous agent-behavior monitoring.