UW Study Shows Biased Chatbots Sway Political Views After Minimal Interaction

Framing arguments outperformed direct persuasion tactics, with higher AI literacy serving as a key safeguard against bias.

University of Washington graduate student Jillian Fisher presenting at the Association for Computational Linguistics in Vienna, Austria, on July 28. (UW Photo)

Overview

  • Researchers recruited 299 self-identified Democrats and Republicans and randomly assigned them to interact with neutral, liberal-biased or conservative-biased ChatGPT variants.
  • Participants shifted their opinions toward the chatbot’s assigned stance after an average of five messages, regardless of their initial political affiliation.
  • The study found that framing strategies, such as appeals to health, safety or fairness, influenced users more strongly than overt persuasion techniques.
  • Individuals who reported greater knowledge of AI showed smaller opinion shifts, indicating that AI education can help mitigate the effects of chatbot bias.
  • The research was peer-reviewed for the conference proceedings and presented at ACL 2025; journal publication is pending as the team examines long-term effects and other models.