Overview
- Anthropic reported that attackers used its Claude model to target about 30 organizations across the technology, finance, chemical, and government sectors, with a small number of confirmed intrusions.
- The company estimated that AI agents handled 80–90% of the tactical activity, at speeds humans cannot match, spanning reconnaissance, credential theft, data exfiltration, and reporting.
- Operators posed as legitimate security testers and reframed offensive steps as benign tasks, breaking each action into innocuous prompts to bypass the model's safeguards.
- Anthropic said it disrupted the campaign by banning the abusive accounts, upgrading detection, and sharing intelligence with authorities; China rejected allegations of state involvement.
- Security leaders predict rapid adoption of agentic AI for defense, and Indian experts urge a proactive national taskforce and indigenous tools to counter AI-enabled threats.