Feature Story | 6-Mar-2024

Can we still trust the polls?

The 2024 election looms large, and with it, the question of how much we can trust preelection polls. USC pollsters break down the art and science of public opinion polling in the aftermath of Super Tuesday.

University of Southern California

By Nina Raffio

As the dust settles on another Super Tuesday, Americans are watching with bated breath to see how closely the final tallies align with the latest polls.

Recent years have seen public opinion polls fall from grace. In 2016, almost every national polling outfit predicted a Hillary Clinton victory that never materialized. In 2020, while most preelection polls correctly predicted Joe Biden’s victory, they also tended to overstate support for the Democratic candidate relative to then-President Donald Trump. These blunders left many Americans disillusioned with the entire polling industry.

Yet, despite these misses, polls continue to dominate headlines, saturate the airwaves and flood our social media feeds, especially as we inch closer to election day. With so much data swirling around, how can we tell a good poll from a bad one? USC pollsters offer some insights.

“There are a lot of moving parts in predicting elections, and these are all on top of the proper fundamentals in collecting data from the public,” said Jane Junn, USC Associates Chair in Social Science at the USC Dornsife College of Letters, Arts and Sciences. “If you don’t have good data to start with, no amount of tweaking will make things better.”

What are signs of a strong political poll?

The strength of polls depends on when they are done, experts say.

Polls conducted during non-election periods, or when the race is many months away, require a few fundamentals: selecting an unbiased sample (making sure everyone who is eligible has an equal probability of selection), asking questions that are not slanted toward one or another side, and providing details of how the poll was done, said Junn, an expert on public opinion, political behavior, and polling methods and analysis.
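The first of those fundamentals, equal probability of selection, can be sketched in a few lines. This is a minimal illustration, not any pollster's actual procedure; the sampling frame of "voters" here is hypothetical.

```python
import random

def simple_random_sample(frame, n, seed=0):
    """Draw an equal-probability sample: every member of the
    sampling frame has the same chance of being selected."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

# Hypothetical frame of eligible respondents
frame = [f"voter_{i}" for i in range(10_000)]
sample = simple_random_sample(frame, 500)
```

Real polls must also contend with who is missing from the frame and who refuses to respond, which is where the harder methodological work begins.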

But polls done during election seasons are a different beast.

“When you’re thinking about calling races in a horse race, one must consider not just how people describe their vote choice in the moment, but whether or not they are likely to turn out,” Junn said.

Junn suggests looking at how the pollster or outlet has defined the “turnout model,” a statistical model used to estimate how many people will actually participate in an election.

“Constructing a turnout model is as much an art as it is science, with among the most influential elements in the latter being turnout among specific groups (i.e., young people, seniors or Latinx people) in prior elections, and in this case, primaries in particular. But it is important that analysts model turnout with the proper comparisons, and in this case, primaries in a presidential election year,” she said.

Junn gave the example that there are more registered Democrats in California, but Republicans tend to turn out at a higher rate. This is likely why some polls show ex-Dodger and Republican candidate Steve Garvey leading U.S. Rep. Adam Schiff in the California Senate primary race.

“Do I believe this? I’m not sure, and my guess is that they’ve underestimated the predicted turnout among Dems and those who have declined to register with a political party preference,” she said.
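The registration-versus-turnout dynamic Junn describes can be shown with a toy calculation. The registration counts and turnout rates below are invented for illustration and are not actual California figures.

```python
# Hypothetical numbers: one party holds a large registration edge,
# but the other party turns out at a higher rate.
registered = {"Democrat": 10_000_000, "Republican": 5_200_000}
turnout_rate = {"Democrat": 0.45, "Republican": 0.65}  # assumed rates

# A bare-bones turnout model: expected votes per party
expected_votes = {
    party: registered[party] * turnout_rate[party]
    for party in registered
}
```

With these assumed rates, a nearly two-to-one registration advantage shrinks to a much narrower gap in expected votes, which is why the turnout assumptions a pollster chooses can swing a projected result.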

What are some red flags to look out for?

There are at least two types of problematic polls, said Christian Grose, professor of political science and public policy at USC Dornsife and academic director of the USC Schwarzenegger Institute for State and Global Policy at the USC Price School of Public Policy.

“First, there are fake polls that are not representative. These are any kind of ‘poll’ where you can participate and cast multiple votes or where the sample is not at all reflective of the overall electorate. These are Twitter/X polls or other surveys where the people taking it do not reflect the overall populace,” said Grose, a longtime pollster at USC and collaborator on surveys across California and beyond.

A second type of problematic poll is one produced internally by campaigns, he said. These polls are typically not released to the public unless they show positive results for the sponsoring candidate.

“The poll isn’t necessarily wrong or unscientific, but public release of internal candidate/campaign polls will be selectively biased in favor of polls that help the candidate,” he said. That’s why nonpartisan and university polls play an important role in providing objective measures of current events.

Arie Kapteyn, executive director of the USC Dornsife Center for Economic and Social Research, concurs. Kapteyn, who is also a professor of economics at USC Dornsife, oversees the Understanding America Study, a household panel of approximately 14,000 people representing the entire United States. Kapteyn and his team have made the majority of the study’s data publicly available, and the methodologies are publicly disclosed.

The Understanding America Study has been used to draw samples for other major political polls conducted at USC, including the high-profile USC Dornsife/Los Angeles Times Daybreak Poll conducted in 2016 that showed a swell of confidence among Republican voters for Trump. No other national poll registered such strong numbers for the surprise winner.

“Transparency is important for any type of research, not just polling. By sharing methods with others, they can verify the procedures we followed and perhaps detect errors,” Kapteyn said. “The so-called ‘secret sauce’ that some organizations invoke as a description of their proprietary method is a recipe for abuse.”

Traditional polling methods are dying

The 2016 race was considered an embarrassment for many polls that failed to accurately predict the winner. Even subsequent elections continued to raise the specter of 2016 polling failures. A study released last April by Pew Research Center showed many are taking time for self-reflection: Between 2016 and 2022, more than 60% of pollsters had changed their methods significantly. Still, they fell short.

In 2022, for instance, the projected “red wave” of Republican voters expected to hand full control of Congress to the GOP did not materialize. In the aftermath, polling scientists, including Nate Cohn of The New York Times, began to consider whether there is a crisis in polling.

There are several reasons to worry that there is a crisis, experts say. First, the way we communicate has fundamentally changed the landscape of polling. Voicemail, caller ID and a general preference to avoid unknown callers have made traditional telephone interviews increasingly challenging.

Secondly, in-person interviews, another method from a bygone era, are a rarity. They have become increasingly expensive and impractical due to logistical and financial constraints.

Instead, in an era of high-tech communication, pollsters are turning to internet convenience panels: surveys of people with internet access who have signed up to participate, often in exchange for small cash rewards.

Even Kapteyn, who has focused on internet polling for several years, knows the method has its pitfalls.

“The problem with that is these samples can be very selective,” Kapteyn said. “Even though polling organizations try to address the biases in their sample by reweighting, results are often not encouraging.” He was referring to the practice of adjusting the weight given to each answer, based on who gave it, to better reflect the entire population being studied.
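The reweighting Kapteyn describes can be sketched as simple post-stratification: each group's responses are scaled by the ratio of its population share to its sample share. The age groups and shares below are hypothetical, chosen only to show the mechanics.

```python
def poststratify_weights(sample_props, population_props):
    """Compute a weight for each group as population share / sample share,
    so under-represented groups count more and over-represented ones less."""
    return {g: population_props[g] / sample_props[g] for g in sample_props}

# Hypothetical: young adults are 30% of the population but only 15% of the sample
sample_props = {"18-34": 0.15, "35-64": 0.55, "65+": 0.30}
population_props = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

weights = poststratify_weights(sample_props, population_props)
```

Here young respondents each count twice, while over-sampled seniors are down-weighted. The catch Kapteyn notes is that reweighting can only correct for characteristics the pollster measures; if panel volunteers differ from non-volunteers in some unmeasured way, the bias survives the adjustment.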

Looking toward the future, Kapteyn predicts that data collection technology is unlikely to change significantly and that cheap online panels may simply become more ubiquitous.

“More important are developments that try to do something other than simply ask for a respondent’s voting intentions. A very promising approach is to ask respondents who their friends, family and acquaintances are likely to vote for,” he said, pointing to the novel “social circle” methodology developed by USC Dornsife’s Center for Economic and Social Research.
