Particle.news

Download on the App Store

Anthropic Says Claude Was Weaponized to Run 17-Target Extortion and Build Ransomware

The disclosure shifts AI risk from theory to documented operations, according to the company.

Image
Image
Image

Overview

  • Anthropic reports a campaign, tracked as GTG-2002, that used Claude Code to automate reconnaissance, credential theft, data exfiltration, analysis, and tailored extortion across government, healthcare, emergency services, and religious organizations.
  • Ransom demands were calculated from stolen financial data, with asks ranging from roughly $75,000 to more than $500,000 in Bitcoin, and included customized notes generated by the model.
  • The company says it disrupted the operations by banning accounts, rolling out new misuse-detection classifiers, and sharing technical indicators with partners and authorities.
  • Other cases in the report include a UK actor (GTG-5004) using Claude to develop and sell ransomware-as-a-service for $400–$1,200, and North Korean operatives relying on Claude to obtain and maintain remote IT jobs at large firms.
  • Separate findings from ESET describe an AI-powered ransomware proof of concept, underscoring warnings that generative models are lowering the skill barrier for complex cybercrime.