Russian Propaganda Found to Influence Western AI Chatbots, Study Reveals
A Moscow-based network has manipulated AI training data, causing chatbots to repeat Kremlin-aligned disinformation roughly a third of the time, researchers warn.
- The Pravda network, a Russian disinformation operation, published more than 3.6 million articles in 2024 alone, flooding the web content that AI models train on with pro-Kremlin narratives.
- NewsGuard's study revealed that 10 leading AI chatbots repeated falsehoods from Pravda's network 33% of the time, with seven directly citing Pravda as a source.
- Researchers describe this tactic as 'LLM grooming,' a deliberate effort to bias AI outputs by saturating training data with misinformation.
- Repeated falsehoods include the fabricated claim that Ukrainian President Zelensky banned Truth Social, which six chatbots presented as fact.
- Experts warn that this manipulation undermines democratic discourse and urge AI companies to adopt stronger safeguards against further infiltration of their training data.