Overview
- Google’s Threat Intelligence Group, which disclosed its findings Monday, said it spotted a zero-day exploit likely built with AI and warned the vendor, derailing a planned mass attack.
- The exploit targeted a popular open-source web administration tool. A logic flaw created by a hardcoded trust assumption let intruders who had already obtained valid credentials bypass two-factor login.
- Analysts found AI fingerprints in the Python code, including tutorial-style docstrings, a fabricated CVSS score and textbook formatting, and said the model was neither Google’s Gemini nor Anthropic’s Mythos.
- Google called the case the tip of a larger shift as AI speeds bug discovery and lowers the skill needed for campaigns, raising pressure on vendors and IT teams to detect AI-written code and patch faster.
- The report also describes AI-aided tactics now in use, including decoy code that hides malware (CANFAIL, LONGSTREAM), an Android backdoor that uses Gemini’s API to tap screens (PROMPTSPY), and March supply-chain hits on LiteLLM-linked builds that leaked cloud keys.