Overview
- Researchers recruited 299 self-identified Democrats and Republicans and randomly assigned them to interact with neutral, liberal-biased or conservative-biased ChatGPT variants.
- Participants shifted their opinions toward the chatbot’s assigned stance after an average of five messages, regardless of their initial political affiliation.
- The study found that framing strategies, such as appeals to health, safety or fairness, influenced users more strongly than overt persuasion techniques.
- Individuals who reported greater knowledge of AI showed smaller opinion shifts, suggesting that AI education can help blunt the influence of biased chatbots.
- Presented at ACL 2025 after peer review for the conference proceedings, the research awaits journal publication as the team explores long-term effects and other models.