Even small, open-source AI chatbots can be effective political persuaders, according to a new study. The findings provide a comprehensive empirical map of the mechanisms behind AI political persuasion, revealing that post-training and prompting – not model scale or personalization – are the dominant levers. The study also reveals evidence of a persuasion-accuracy tradeoff, reshaping how policymakers and researchers should conceptualize the risks of persuasive AI.

There is growing concern that advances in AI – particularly conversational large language models (LLMs) – may soon give machines significant persuasive power over human beliefs at unprecedented scale. However, just how persuasive these systems truly are, and the underlying mechanisms that make them so, remain largely unknown. To explore these risks, Kobi Hackenburg and colleagues investigated three central questions: whether larger and more advanced models are inherently more persuasive; whether smaller models can be made highly persuasive through targeted post-training; and which tactics AI systems rely on when attempting to change people’s minds.

Hackenburg et al. conducted three large-scale survey experiments involving nearly 77,000 participants who conversed with 19 different LLMs – ranging from small open-source systems to state-of-the-art “frontier” models – on hundreds of political issues. They also tested multiple prompting strategies and several post-training methods, and assessed how each “lever” affected persuasive impact and factual accuracy.

According to the findings, model size and personalization (providing the LLM with information about the user) produced small but measurable effects on persuasion. Post-training techniques and simple prompting strategies, on the other hand, increased persuasiveness dramatically, by as much as 51% and 27%, respectively. Once post-trained, even small, open-source models could rival large frontier models in shifting political attitudes.

Hackenburg et al. found that AI systems are most persuasive when they deliver information-rich arguments; roughly half of the variance in persuasion effects across models and methods could be traced to this single factor. However, the authors also discovered a notable tradeoff: models and prompting strategies that boosted persuasiveness often did so at the expense of truthfulness, showing that optimizing an AI model for influence may inadvertently degrade accuracy.

In a Perspective, Lisa Argyle discusses this study and its companion study, published in Nature, in greater detail.
Special note / related paper in Nature: A paper with overlapping authors and related themes, “Persuading voters using human–artificial intelligence dialogues,” will be published in Nature on the same day, under the same embargo: 2:00 p.m. U.S. Eastern Time on Thursday, 4 December. For the related paper, please refer to the Nature Press Site: http://press.nature.com or contact the Nature Press Office team at press@nature.com.
Journal
Science
Article Title
The levers of political persuasion with conversational artificial intelligence
Article Publication Date
4-Dec-2025