Overview
- OpenAI’s Oct. 7 threat report describes adversaries using ChatGPT for planning while turning to other AI systems, such as DeepSeek and video-generation models, to execute phishing, influence operations and hacking tasks.
- The company banned accounts it linked to suspected Chinese government operatives who sought proposals for social media monitoring and a tool to analyze Uyghur travel data against police records.
- OpenAI also removed Russian-speaking accounts that used ChatGPT to help develop malware and to draft prompts, scripts and translations for covert influence videos that were later produced with other AI tools.
- Investigators say these multi-model toolchains fragment visibility and complicate detection and attribution, though most campaigns observed to date showed limited reach.
- OpenAI reports disrupting more than 40 malicious networks since February 2024, notes evasion tactics such as removing AI telltales like em dashes, and estimates ChatGPT is used to identify scams up to three times more often than to create them.