Overview
- Google’s Threat Intelligence Group, which published its report Monday, said a prominent criminal group prepared a Python-based exploit for an unnamed open-source admin tool and that a vendor patch blocked a planned mass campaign.
- The exploit targeted two-factor authentication by abusing a high-level logic error where the software trusted a hardcoded assumption, allowing access once attackers held valid credentials.
- Researchers found telltale signs of large language model output in the script, including verbose instructional docstrings, a fabricated CVSS score, and clean, textbook Python structure.
- Google said this is its first evidence that attackers used AI to both find and weaponize a zero-day, and it reported strong interest from China- and North Korea-linked groups in applying AI to vulnerability research.
- Investigators ruled out Google’s Gemini and Anthropic’s Claude as the source model, and they urged organizations to apply the patch and review 2FA flows, since AI is shrinking the time defenders have to fix such flaws.