Particle.news

OpenAI Details Multi-Model Abuse of ChatGPT, Blocks China-Linked and Russian Criminal Accounts

OpenAI reports that splitting tasks across multiple AI providers creates blind spots that make malicious activity harder to detect.

Overview

  • The latest threat report says foreign adversaries often plan or refine operations with ChatGPT and then use other models to generate videos, automate phishing, or build malware.
  • OpenAI banned several accounts suspected of ties to Chinese government entities after they sought proposals for social-media monitoring and broader surveillance concepts.
  • The company also removed Chinese‑language accounts that used ChatGPT to aid phishing and malware efforts and to research automation using China’s DeepSeek, as well as accounts linked to Russian‑speaking criminal groups developing malware.
  • Examples in the report include a user likely tied to a Chinese government entity who sought a proposal to analyze Uyghur travel and police records, and another who requested promotional materials for tools that scan social media for political or religious content; China's embassy dismissed the claims as groundless.
  • OpenAI found no evidence its models enabled novel offensive tactics, noting that actors are instead refining existing tradecraft and learning to mask AI fingerprints; it also confirmed overlap with an actor Anthropic recently flagged, underscoring that operations span multiple AI vendors.