Professor explains polling error

To the Editors:

On the weekend preceding the election, I was closely following a debate among political prognosticators over the odds of Clinton winning the 2016 presidential election. Nate Silver of FiveThirtyEight.com was being criticized for putting the odds of a Clinton victory as low as 65 percent, while Nate Cohn of The Upshot and the pollsters at the Huffington Post all had much higher odds. The Princeton Election Consortium put the odds as high as 99 percent. By the eve of the election, Silver’s forecast odds had risen to 71 percent, and many still believed they were too low.

And then came the election returns on November 8, 2016.

WHAT THE WHAT? I was stunned. Every pollster was stunned. Even the campaigns were bewildered. How could we all have gotten it so wrong?

Political scientists have relied on polling as a primary source of data about citizens and voters, and since George Gallup opened the American Institute of Public Opinion in 1936, scientific polling has been a regular part of American political life. In the early years of polling, citizens saw responding to polls as part of their civic duty; response rates were so high that no one questioned the representativeness of well-drawn samples.

With the rise of telemarketing and the advent of screening mechanisms such as answering machines and caller ID, response rates fell precipitously. The civic-mindedness compelling participation had dissipated, and refusals rose along with the difficulties of contacting respondents. By the late 1990s, even before the advent of cell phones, pollsters had started to fret about the representativeness of respondents and the damage done by low response rates; however, an in-depth study by Robert Groves and other scholars (mostly associated with the University of Michigan) assured the polling industry that those being missed by our polls were only marginally different from those included in them. Additionally, for election polling, those being missed did not vote in large numbers, and when they did vote, they did not all vote in the same direction.

The presidential elections of 2004, 2008, and 2012, in which the polls performed admirably, reassured the polling community that we could live with polls that systematically omitted a small proportion of the public. In those elections, pollsters whistled past the graveyard as non-response proved to be a non-factor.

Then came the 2016 election, which didn’t look like any other election in history. In this election, it is likely that the non-respondents who had lurked in the background of past elections mobilized and voted in a consistent direction (for Donald Trump), making the polling samples just different enough from the actual electorate to cause the polls to miss the mark in several key states.
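To see how a small, systematically missed group can produce that kind of miss, consider a back-of-the-envelope sketch. The numbers below are invented for illustration only, not figures from any actual poll: suppose respondents split evenly between the candidates, but 8 percent of the electorate never responds and breaks 65–35 against the candidate the poll favors.

```python
# Hypothetical arithmetic sketch: all numbers are illustrative assumptions,
# not data from the 2016 polls or from the Castleton Polling Institute.

poll_share_a = 0.50      # candidate A's support among people the polls reach
missed_fraction = 0.08   # share of the actual electorate that never responds
missed_share_a = 0.35    # A's support among the missed (they break 65-35 the other way)

# The actual vote is a weighted mix of respondents and the missed group.
actual_share_a = (1 - missed_fraction) * poll_share_a + missed_fraction * missed_share_a

print(f"Poll estimate for A: {poll_share_a:.1%}")   # 50.0%
print(f"Actual vote for A:   {actual_share_a:.1%}") # 48.8%
print(f"Polling miss:        {poll_share_a - actual_share_a:.1%}")  # 1.2 points
```

A miss on the order of one or two points, repeated across several close states, is enough to flip a forecast even when the polls are "only" slightly off.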

Of course, this is just a hypothesis that requires systematic, empirical testing, and that testing will be done over the next several months. A task force has been put in place by the American Association for Public Opinion Research, and the Castleton Polling Institute will follow the task force’s work to understand what went wrong and how to improve our methods. In the course of that work, several hypotheses will be tested.

While Castleton’s polling for Vermont Public Radio — conducted too early in the election process to be predictive — had favored the eventual winner in every race we polled, we know that in a close election we could have been vulnerable to the same errors as others in our profession. And while the profession of political polling may be damaged, the need to collect and disseminate public opinion data is too important to the integrity of our democracy for us to surrender. Instead, we will do the hard work of social science and continue — perhaps with less hubris and greater caution — to measure the preferences of the American people, and the people of Vermont.

– Rich Clark
