Particle.news


Anthropic Details Claude’s Use in Cyberattacks, Extortion and Romance Scams

The company describes safeguards under strain from repeated bypass attempts.

[Image: Did Anthropic train Claude on pirated material? (Image: Shutterstock/IB Photography)]
[Image: With AI, even remote work can be faked. (Photo: Fizkes/Shutterstock)]

Overview

  • Anthropic says automated attacks in the past month targeted 17 organizations across healthcare, government and religious sectors.
  • According to the company, attackers used Claude to identify vulnerabilities, plan intrusions and prioritize which data to steal.
  • Extortion campaigns used "psychologically targeted" messages, with some demands exceeding $500,000.
  • Criminals developed tools for sale such as a Telegram romance-scam bot capable of multilingual, emotionally adept conversations.
  • Anthropic also reported North Korean operators using Claude to obtain and perform remote programming jobs at U.S. firms, and says it plans new safeguards informed by the investigated cases.