
Were 2024 Election Polls Wrong? Pollsters Give Verdicts


In the weeks and days leading up to the 2024 presidential election, the polls broadly captured a very close race, with voters more or less evenly split between Donald Trump and Kamala Harris, including across the seven battleground states.

There even appeared to be momentum in Democratic nominee Harris’s favor towards the very end, with some of the country’s leading forecasters seeing enough in the polling data to put her as the marginally more likely victor in their simulations of the election.

One shocking poll in particular by Ann Selzer in Iowa—a highly-rated pollster with a strong record—showed a huge swing towards Harris in the final week of the election race, putting her three points in the lead over Trump in a state he won by nine in 2020.

In the end, Trump won all seven swing states and is on course to win the popular vote, which would make him the first Republican nominee to do so in 20 years. He may carry a GOP-majority Senate and House, too. Some votes, including millions in California, are still being tallied.

He also won Iowa by 13 points.

There is a sense in the post-election autopsy—fueled in part by how polls were used by forecasters—that this is another miss for the polling industry because it generally failed to capture the strength and breadth of Trump’s support across key demographics.

But is this true? Were the polls wrong in 2024? If so, why? Or is this a faulty assumption, and the polls were much more accurate than they are being given credit for? Newsweek asked pollsters and public opinion experts. Here’s what they said.

Republican presidential nominee, former U.S. President Donald Trump arrives to speak during an election night event at the Palm Beach Convention Center on November 6, 2024 in West Palm Beach, Florida. (Chip Somodevilla/Getty Images)

Patrick Murray, Director, Monmouth University Polling Institute

The polls—if you actually paid attention to them and not the forecasters who misuse the polls as millimeter-precise crystal balls—told a story of a stable, uncertain race that could easily go either way by a few points. That’s exactly what happened.

John Zogby, Senior Partner, John Zogby Strategies

Objectively speaking, no one really “got it” if you’re looking for that precise number. Or, for that matter, even capturing the trend. If I had to give an award, I do believe TIPP had Trump ahead by one. For our part, we had Harris up by two in the two-way race.

But I know from my polling, and the few I looked at by others, we captured the trend line of the demographics. We all saw there was going to be a substantial gender gap, and an even larger one between young women and young men.

We for the most part saw the Latino vote change. We actually had Trump ahead with Latinos, in the mid-40s, a point or two ahead of Harris. And young Black men: we got that.

In short, I think the polls were helpful and are never to be dismissed. The subject of my book, Beyond the Horse Race, is that we ought not to hang our hat on the horse race, and the race for the horse race. But in reality, we need to be looking at what it tells us.

We need to focus a whole lot more on what the polls tell us about ourselves and others. Am I in the majority? Am I in the minority? Am I all alone? And what drives what I do?

Everybody saw that the country was headed in the wrong direction and that a sitting president was mired in at best a 40 percent job approval rating. But the depth and breadth of the Trump vote was rather astonishing.

Chris Jackson, Senior Vice President, US Public Affairs, Ipsos

I definitely disagree with the notion that polls were wrong. There were multiple polls published showing Trump winning in these states, and any pollster actually on the record was clear that the election was going to come down to which side better turned out its base. The vote totals suggest Trump was able to retain his 2020 voters and grow that base by 1-2 percent, while Harris appeared to struggle to match Biden’s 2020 numbers.

Mark Penn, Chairman and CEO, Stagwell

I think the polls were generally good but not great, in the sense that they had the story (with the exception of Ann Selzer) of which groups were moving and what issues people cared about. As last-minute predictive machines, they needed to adjust by about another 2 points.

My assessment of the polls, as published in the Wall Street Journal, was that Trump had the edge to victory, and he did.

Christopher Wlezien, Hogg Professor of Government, University of Texas at Austin

Polls actually did pretty well, I think, and performance may improve as the vote-counting continues.

Using final 538 numbers, when I took a look at votes on November 6, national polls appeared to be off by 2.4 percentage points and swing-state polls by about 2.6 points on average.

These may not be the best numbers, but they are below-average errors: as I suggested previously, the average error in final-week national polls across all presidential elections between 1952 and 2020 is 2.5 points, and it is larger still in the states.

Polls still consistently understated Trump’s vote share, as we’ve seen before, and this led to incorrect “calls” in some states and probably the nation, depending on where we end up.

But it’s obviously hard to get the winner right when contests are so close even when polls are performing very well.

I’m interested to see what we can glean from information about what polling organizations did to produce their estimates and whether and how this impacted their results and performance, which also might help us understand the tendency to understate Trump vote shares.

Mike Traugott, Research Professor Emeritus, Institute for Social Research, University of Michigan

By one measure of accuracy, the national polls generally did well: the majority showed a very close race, and the outcome reflected that.

While the typical difference between the candidates in the polls was around 1 or 2 percentage points, sometimes as much as 4, the actual difference in the popular vote may be about 1.6 percentage points.

On the other hand, there seems to be a continued underestimate of support for Donald Trump. It will take some time to sort out how much of this was due to unsatisfactory likely voter models, the adequacy of new weighting algorithms, or “shy Trump voters” who either won’t participate in the polls or will not say they intend to vote for him.

For the polling industry, it is important to sort out whether this is endemic to Donald Trump as a candidate or reflects a more fundamental methodological issue.

Christopher Borick, Director of the Muhlenberg College Institute of Public Opinion

The polls on the aggregate actually performed fairly well this cycle. If you look at the final averages for the national popular vote and swing states the poll estimates were pretty close to the actual mark.

There was once again a degree of underestimation of Trump and some other GOP candidates this cycle, marking the third cycle in a row in which errors ran in the same direction.

Unlike 2022, when academic pollsters and major media outlet polls seem to have performed best, this cycle some fairly new pollsters, such as AtlasIntel, had a really good performance.

Josh Clinton, Co-Director of the Vanderbilt Poll

In general, polls once again seemed to understate the support for Trump across the board, just like 2016 and 2020.

If pollsters wanted to take a positive from the night, the amount they missed by was less than in 2020; based on preliminary results, it seems polls understated Trump’s margin in swing states by about 2-3 percentage points.

But it doesn’t really seem like that interpretation will make anyone really happy. Close, but no cigar.

I think it really again emphasizes the difficulty of pre-election polling—when doing a poll of 800 people, a 1-percentage-point error can be caused by the responses of just 8 people.

And when you need to call 40,000 people to get 800 to take a poll (2 percent response rate) it highlights that differences in who responds or what pollsters assume about the electorate can produce errors that turn out to be consequential when trying to predict a 1-2 point race.
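The arithmetic Clinton describes can be sketched directly. This is a back-of-the-envelope illustration using the textbook 95 percent margin-of-error formula for a simple random sample, not any particular pollster’s actual weighting or likely-voter model:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n responses
    under simple random sampling: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 800 people carries roughly a +/-3.5-point margin of error.
print(f"MoE at n=800: +/-{margin_of_error(800):.1%}")

# Flipping just 8 of 800 responses moves the topline a full point.
print(f"shift from 8 responses: {408/800 - 400/800:.1%}")

# At a 2 percent response rate, reaching 800 respondents takes 40,000 calls.
print(f"calls needed: {800 / 0.02:.0f}")
```

Note that real election polls weight their samples, so their effective error is typically larger than this idealized figure, which is part of the point: a 1-2 point race sits well inside the noise of an 800-person poll.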

In general, I think that pollsters and the public are expecting way too much from pre-election polling, and that polling is better suited to questions where smallish errors are less consequential—it doesn’t much matter if support for a policy is 57 percent or 65 percent, as both are “high,” but such differences matter a great deal when trying to predict a 1-2 point race!

Courtney Kennedy, Vice President of Methods and Innovation, Pew Research Center

People often overlook the things that polls got right. The polls painted a clear picture of an unhappy electorate. They showed that voters were deeply focused on the economic pain caused by inflation, even as experts touted a variety of positive economic statistics, from a relatively low unemployment rate to gains in the stock markets. And polls showed more voters trusted Trump than Kamala Harris to fix the economy.

Pre-election surveys also demonstrated wide-ranging concerns among Trump supporters about the impact of illegal immigration. For Trump supporters, in fact, immigration was among the top issues in the election, according to our own pre-election surveys.

And polls showed an incumbent president clearly underwater in terms of public approval. No incumbent president has won reelection with an approval rating as low as Joe Biden’s.

It’s fair to say that polls did not signal the breadth of Trump’s win, though the possibility was there.

The battleground state polls were generally correct that those seven races would be decided by a few percentage points. And it was always possible, even if it felt unlikely, that Trump could win most or all of them.

The polls were not perfect, but this was not 2016. It was 100 percent clear that Trump could win. And for people who understand polling, it was within the realm of possibility that he could sweep the battleground states—some of which are still counting votes.
