Overview
- Google’s Threat Intelligence Group, in a Monday report, said a prominent cybercrime gang used AI to find a new flaw and build a zero-day against a popular open-source web admin tool.
- Google alerted the vendor, which patched the bug before the group could launch its planned mass exploitation; the Python-based exploit bypassed two-factor authentication once an attacker already held valid credentials.
- Investigators pointed to AI fingerprints in the exploit code, including educational docstrings, dense “textbook” Python formatting, and a hallucinated CVSS score, while saying they could not name the model used.
- The company warned that criminals and state-linked operators in China, Russia, and North Korea are testing AI in their workflows, with examples that include tricking Gemini into researching TP-Link router flaws and using AI-written filler code to hide malware behavior.
- Other researchers have seen related misuse, such as Dragos's report of hackers using Anthropic's Claude against Monterrey's water and drainage systems; the findings sharpen calls for faster vendor patching and for defenses that can spot AI-authored malware.
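The indicators investigators described, verbose "educational" docstrings and a hallucinated CVSS score embedded in the code, suggest simple static signals a defender could check for. Below is a minimal, hypothetical sketch (not Google's actual method) of a heuristic that measures docstring density in Python source and flags embedded CVSS references, which rarely appear inside real exploit code:

```python
import ast

# Hypothetical heuristic inspired by the indicators above: flag Python
# source with an unusually high docstring-to-code ratio or an embedded
# CVSS score string. Thresholds and signals here are illustrative only.
def ai_authorship_signals(source: str) -> dict:
    tree = ast.parse(source)
    # Nodes that can carry docstrings: modules, classes, and functions.
    doc_nodes = [
        n for n in ast.walk(tree)
        if isinstance(n, (ast.Module, ast.ClassDef,
                          ast.FunctionDef, ast.AsyncFunctionDef))
    ]
    doc_chars = sum(len(ast.get_docstring(n) or "") for n in doc_nodes)
    return {
        "docstring_chars": doc_chars,
        "docstring_ratio": doc_chars / max(len(source), 1),
        "mentions_cvss": "CVSS" in source.upper(),
    }

# Toy sample mimicking the reported style: an over-explained docstring
# citing a severity score (the CVSS value here is a made-up example).
sample = '''
def check_token(token):
    """Validate the token.

    This function demonstrates, step by step, how the token is
    checked before access is granted. Severity: CVSS 9.9.
    """
    return bool(token)
'''

signals = ai_authorship_signals(sample)
```

Such heuristics are noisy on their own (well-documented legitimate code also scores high), so they would serve only as one weak signal among many in triage.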