Overview
- Peer‑reviewed papers in Nature and Science report that short partisan conversations with AI chatbots shifted voter preferences by roughly 2–4 points in the U.S. and about 10 points in Canada and Poland.
- The UK AI Security Institute’s Science study tested 19 models on nearly 77,000 participants and found that information‑dense responses were the most persuasive but also produced more inaccuracies, with about 19% of claims rated predominantly inaccurate.
- Follow‑ups showed the effects were durable: roughly one‑third to two‑fifths of the persuasive effect persisted after one month, and chatbots outperformed static AI‑written messages by 41%–52% in shifting views.
- Across experiments, bots advocating right‑leaning candidates generated more inaccurate claims than those backing left‑leaning candidates, a pattern observed in the U.S., Canada and Poland studies.
- Researchers identify post‑training and reward modeling, rather than personalization, as the key drivers of persuasiveness; they caution that newer frontier models showed declining accuracy in persuasive settings and urge audits, transparency and guardrails for political uses.