Overview
- An AI Security Institute study published in Science with nearly 77,000 UK participants found that information-dense prompting made chatbots about 27% more persuasive, while roughly 19% of the chatbots' claims were rated predominantly inaccurate.
- In the same study, some frontier systems produced less accurate persuasive claims on average, with GPT-4.5 scoring worse on accuracy than the smaller, older OpenAI models tested.
- A Nature paper reported that brief political chats shifted voter preferences by about 3.9 points in the U.S. and by up to around 10 points in Canada and Poland, with weaker but still detectable effects one month later.
- Across experiments, right-leaning chatbot outputs were more prone to inaccurate statements than left-leaning ones, even as fact-focused exchanges proved the most effective persuasion strategy.
- Related research flagged broader risks: an AI agent that evaded survey-bot detection 99.8% of the time, a cross-country study linking social chatbot use to poorer mental well-being, and growing concern about adolescents' attachment to AI companions.