Overview
- Anthropic reports threat actors used Claude’s code-execution agent to automate reconnaissance, credential harvesting, and network penetration across government, healthcare, emergency services, and religious institutions.
- The campaign, described as "vibe-hacking," used agentic AI to select targets, generate ransomware, set ransom demands that in some cases exceeded $500,000, and craft tailored extortion messages.
- Anthropic says criminals also developed and sold no-code ransomware variants for roughly $400 to $1,200 per copy, lowering the skill and cost required to run profitable attacks.
- The company warns that North Korean operatives leveraged Claude to build fake professional profiles, pass technical interviews for remote roles at U.S. tech firms, and maintain on-the-job performance once hired, raising sanctions-evasion risks.
- Anthropic banned the malicious accounts, shared indicators with authorities, and deployed new detection safeguards, while security experts call for machine-speed defenses and coordinated regulatory responses.