Political polling is a snapshot in time. What do the ever-changing results really tell us? Do we in the news media use them well?
And after polls were so off in 2016, and again in 2020, how much faith should you place in them this fall?
Polls are not thermometers, and they are not predictive, even though the news media often treat poll results as a real-time scoreboard. Many factors contribute to a good poll, among them a large sample of likely voters surveyed by a trusted source. The best pollsters have standards they follow.
On a recent episode of the WUNC Politics Podcast, we broke down, explained and explored the state of political polling in North Carolina with two experts:
- Scott Keeter, the Senior Survey Analyst with the Pew Research Center.
- David McLennan, a Political Science Professor at Meredith College and Director of the Meredith Poll.
The excerpts below have been edited for length and clarity.
First, why were the polls so off in 2016 and 2020?
Scott Keeter, Senior Survey Analyst with the Pew Research Center: “If we had a very concise answer to that question, we’d all be in a better place in terms of the polling field. In the aftermath of 2016, we did a number of post-mortem studies, sort of autopsies, if you will, and identified a number of ways in which polling could be improved.
One was very important: the changing nature of political coalitions in America. People with lower levels of education tended to vote more Republican than they had in the past, at least among the white non-Hispanic population. And a lot of pollsters were not properly adjusting their samples to account for the proper number of less educated white, non-Hispanic people. And so polling actually improved considerably from 2016 to 2020.
But that was not enough to solve the problem of the underrepresentation of Republican supporters. And so, in 2020, the polling errors were actually larger than they were in 2016. That would seem to suggest that we’re seeing something that we’ve actually never seen before in polling, and that is that supporters of one political party, the Republican Party, are slightly less likely than supporters of the Democratic Party to cooperate with polling.”
Why is that the case?
Keeter: “I think the standard assumption on the part of most observers is that the Republicans right now are less trusting of major institutions, which include news organizations that sponsor a lot of polling. We aren’t able to determine that for certain, but it may stand to reason, given that Donald Trump repeatedly said while he was president that you shouldn’t trust the polls.”
Even though Trump is not on the ballot, no one is quite sure of the ultimate impact of his looming presence. A similar uncertainty applies to the issue of abortion.
Will it mobilize one political base over the other, and will it truly motivate people to cast ballots who would not have otherwise voted?
Keeter says another good approach to consuming polls wisely is to look at the collective body of work.
Keeter: “If you have the ability to compare a poll that comes across your desk or in your inbox, with other polling that’s been done in the same race in a recent timeframe, how does it stack up? It doesn’t mean that a new poll is wrong if it doesn’t fit with the average, because it could be that things are changing in the race, or it could be that this is really a much better poll than the polls that are in the average. But I find the averages to be very helpful.”
Pollsters use historical turnout as a factor in projecting upcoming participation. While Roy Cooper and Donald Trump both won in North Carolina in 2020, neither did so by margins as wide as the polls had shown weeks earlier.
Which brings us to projecting turnout in North Carolina this fall. Consider, during the 2014 midterm, 44% of registered voters cast ballots. In 2018, 53% of registered voters turned out. So it would be reasonable — on its face — to estimate turnout this fall in the mid-50s.
But what if 60% to 62% of voters fill in bubbles this year?
Keeter: “That difference could matter a lot. But it depends on whose people are turning out. So I think one of the factors about Trump’s unexpected successes – unexpected from the point of view of the polling community or from other observers – was that he turned out people who were not habitual voters, and did not appear likely to vote. And I think he did that in 2016, and again in 2020.”
How confident can citizens be in polls in 2022, after what happened in 2020, or what happened in 2016?
David McLennan: “One response I would give, and you probably get this from every pollster you talk to, is that people forget that in 2018, the polls were almost spot on. And I know this sounds defensive, but we know that when President Trump is on the ballot, things get thrown out of whack. And we have not, as a polling industry, figured out how to get Trump supporters to answer questions, and to answer them honestly.
“So, I’m going to use the same techniques I used in 2018, both for the horse-race type of polling and for the issue polling. And I think that if we can get the sampling correct, which means predicting what percentage of voters are going to actually participate in the fall election, we’ll have a pretty good set of results.”
How do you get that sampling correct?
McLennan: “So, what I’m looking at for 2022, based on surveying I’ve done about enthusiasm, is it’s going to look more like 2018 than, say, 2006. But it’s actually going to perhaps be 2018 plus some, because all the indications are that no group is going to stay at home. So we’re trying to model – using maybe 2018 as a starting point – what 2022 might be.
“I hate to say it, because people probably will throw things, but it’s an educated guess.”
At the risk of sounding overly cynical: skepticism around polls is healthy, and even necessary, in 2022.
Last week, The New York Times reported that some of the same warning signs from 2016 and 2020 are percolating.