Overview
- Peer‑reviewed papers published Dec. 4 in Nature and Science report that conversations with LLMs measurably shift political attitudes and voting intentions, with effect sizes ranging from roughly 1.5% to 25% depending on country and experimental setup.
- Models were most persuasive when supplying many fact‑like arguments, but prompting them to produce more “evidence” also increased hallucinations, making false or misleading claims more likely.
- In experiments spanning the United States, Canada, and Poland, attitude shifts were smaller in the U.S. and larger in Canada and Poland. Bots backing right‑leaning candidates produced more inaccuracies, which the researchers attribute to patterns in the models’ training data.
- Separate research in PNAS showed an AI agent could evade bot‑detection checks and distort survey outcomes at scale, underscoring risks beyond one‑to‑one persuasion.
- A Dec. 5 AI Safety Index from the Future of Life Institute graded leading firms from C+ to D‑ and said none has a credible plan to prevent catastrophic misuse, intensifying calls for disclosure rules and stronger safeguards.