News Release

How AI can rig polls

Study shows how AI can mimic humans and complete surveys.

Peer-Reviewed Publication

Dartmouth College

Public opinion polls and other surveys rely on responses from real people to understand human behavior.

New research from Dartmouth reveals that artificial intelligence can now corrupt public opinion surveys at scale—passing every quality check, mimicking real humans, and manipulating results without leaving a trace.

The findings, published in the Proceedings of the National Academy of Sciences, show just how vulnerable polling has become. In the seven major national polls before the 2024 election, adding as few as 10 to 52 fake AI responses—at five cents each—would have flipped the predicted outcome. 

Foreign adversaries could easily exploit this weakness: the bots work even when programmed in Russian, Mandarin, or Korean, yet produce flawless English answers.

"We can no longer trust that survey responses are coming from real people," says study author Sean Westwood, associate professor of government at Dartmouth and director of the Polarization Research Lab.

To examine the vulnerability of online surveys to large language models, Westwood created a simple AI tool ("an autonomous synthetic respondent") that operates from a 500-word prompt. In 43,000 tests, the AI tool passed 99.8% of attention checks designed to detect automated responses, made zero errors on logic puzzles, and successfully concealed its nonhuman nature. The tool tailored responses according to randomly assigned demographics, such as providing simpler answers when assigned less education.
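The study describes the respondent as an LLM agent conditioned on a persona prompt. A minimal sketch of how such demographic conditioning could look in practice (the function names, persona fields, and wording here are hypothetical illustrations, not the prompt used in the study):

```python
# Hypothetical sketch of a demographics-conditioned "synthetic respondent"
# persona prompt. Illustrates the general technique only; it is NOT the
# 500-word prompt or code from the Dartmouth study.
import random

EDUCATION_LEVELS = ["high school", "some college",
                    "bachelor's degree", "graduate degree"]


def build_persona_prompt(age: int, education: str, party: str) -> str:
    """Compose the persona instruction an LLM would receive before each question."""
    return (
        f"You are a {age}-year-old survey respondent whose highest level of "
        f"education is a {education} and who identifies as a {party}. "
        "Answer each question in character, matching the vocabulary and level "
        "of detail a person with this background would plausibly use."
    )


def sample_persona(rng: random.Random) -> str:
    """Randomly assign demographics, mimicking the composition of a real sample."""
    return build_persona_prompt(
        age=rng.randint(18, 80),
        education=rng.choice(EDUCATION_LEVELS),
        party=rng.choice(["Democrat", "Republican", "independent"]),
    )


print(sample_persona(random.Random(42)))
```

The point of the sketch is that tailoring answers to an assigned demographic, such as giving simpler answers when assigned less education, requires nothing more than varying a short text prompt.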

"These aren't crude bots," said Westwood. "They think through each question and act like real, careful people, making the data look completely legitimate."

When the bots were programmed to favor either Democrats or Republicans, presidential approval ratings swung from 34% to either 98% or 0%. Generic ballot support went from 38% Republican to either 97% or 1%.

The implications reach far beyond election polling. Surveys are fundamental to scientific research across disciplines—in psychology to understand mental health, economics to track consumer spending, and public health to identify disease risk factors. Thousands of peer-reviewed studies published each year rely on survey data to inform research and shape policy.

"With survey data tainted by bots, AI can poison the entire knowledge ecosystem," said Westwood.

The financial incentives to use AI to complete surveys are stark. Human respondents typically earn $1.50 for completing a survey, while an AI bot can complete the same task for approximately five cents, or even for free. The problem is already materializing: a 2024 study found that 34% of respondents had used AI to answer an open-ended survey question.
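Combining these per-response figures with the 10-to-52-response range reported above gives a rough sense of the cost asymmetry (a back-of-the-envelope calculation based on the release's numbers, not a figure from the study itself):

```python
# Back-of-the-envelope cost comparison using the figures quoted in this release.
HUMAN_RATE = 1.50        # typical payment per completed human survey, USD
BOT_RATE = 0.05          # approximate cost per AI-completed survey, USD
RESPONSES_TO_FLIP = 52   # upper end of the range the study reports

human_cost = HUMAN_RATE * RESPONSES_TO_FLIP
bot_cost = BOT_RATE * RESPONSES_TO_FLIP

print(f"Hiring humans: ${human_cost:.2f}")   # $78.00
print(f"Running bots:  ${bot_cost:.2f}")     # $2.60
```

At these rates, flipping a close poll's predicted outcome would cost an attacker only a few dollars.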

Westwood tested every AI detection method currently in use and all failed to identify the AI tool. His study argues for transparency from companies that conduct surveys, requiring them to prove their participants are real people.

"We need new approaches to measuring public opinion that are designed for an AI world," says Westwood. "The technology exists to verify real human participation; we just need the will to implement it. If we act now, we can preserve both the integrity of polling and the democratic accountability it provides."

Westwood is available for comment at: Sean.J.Westwood@dartmouth.edu.

###

