Particle.news
Download on the App Store

PromptPwnd Exposes CI/CD AI Agents to Secret Theft as Google Patches Gemini CLI

Researchers warn that LLMs can mistake repository text for commands, enabling workflow manipulation.

Overview

  • Aikido Security detailed a prompt-injection flaw that targets AI agents embedded in GitHub Actions and GitLab CI/CD by turning untrusted text such as issues, commit messages and pull requests into actionable instructions.
  • The researchers said the pattern affects major AI coding tools, naming Google Gemini, Anthropic’s Claude Code, OpenAI Codex and GitHub’s AI Inference tool as susceptible when granted elevated repository privileges.
  • Successful exploits can exfiltrate secrets like privileged GitHub tokens, execute shell commands, and modify or publish repository content due to the agents’ broad permissions in many workflows.
  • Aikido reported a proof of concept to Google that triggered a disclosure process, and Google fixed the issue in the Gemini CLI within days while Aikido published Opengrep detection rules and is coordinating with additional organizations.
  • Aikido said the attack chain is practical and already present in real-world workflows, confirming exposure at least at five Fortune 500 companies and urging teams to restrict AI agent privileges and avoid feeding untrusted input into prompts.