Image courtesy of Elias Bizannes via Flickr.
Across time and geography, pre-election polling has been a vital method for measuring voter opinion.
Whenever an election looms, polls begin to dominate the news cycle. They give us an idea of which candidates are leading in various locations and among various demographics, and which are not doing so hot.
The results of pre-election polls, however, are just as often surprising as they are on point, and are sometimes proven spectacularly wrong come election time.
In light of past failures and the prominence of polls far and wide, it’s worth digging into the particulars to see how polls arrive at their statistics, and how accurate a representation they truly are.
Pre-election polling: How it works
According to Pew, the leading think tank in the world of polls, pre-election polling typically works through the following steps:
- Identifying likely voters (based on likely voter criteria)
- Reaching out to a random sample (through landline and cellphone numbers)
- Determining voter preference (questions determining voters’ political leaning)
- Gauging accuracy (do the predictions match the results?)
Pew has a pretty good record in terms of predictions, but early polls are rarely a good indication of what’s to come, Pew notes, and should instead be taken as “snapshots in time.”
But even in best practice, many say polling is a system in crisis. Here’s why:
- Growth of cellphones: Pollsters today reach out via both landlines and cellphones, but landline users remain over-represented because cellphone users are harder to reach
- Declining to poll: The response rate to polls is abysmal, at about 8 percent in 2014, down from roughly 80 percent in the 1970s
- High costs: The cost to commission a quality poll runs into five figures due to the low response rate; many organizations can’t afford to prioritize this within their budgets
Essentially, drawing a random sample is much more difficult than ever before.
As one example, a poll that suggested Donald Trump would beat Hillary Clinton, Bernie Sanders, and Joe Biden, when analyzed, was revealed to have used a tiny sample of mostly landline users. Add in the fact that it was conducted early in an election cycle, and this poll means very little.
Not all bad
Luckily, the use of weighted demographics has, for the most part, offset low response rates in terms of result accuracy.
Because of the predictable and polarized trends among age, race, and location, pollsters can assign value to certain demographics that are over or underrepresented to fill in the larger picture.
This tends to work well for general elections, when larger turnout and firmer decisions are expected, but it predicts outcomes less well in primaries, when turnout is low and voters are less decided.
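The weighting idea described above can be sketched in a few lines. This is a toy illustration with hypothetical numbers (the age groups, shares, and support figures are invented for the example, not drawn from any real poll): each group's weight is its share of the electorate divided by its share of the raw sample, so over-sampled groups count for less and under-sampled groups count for more.

```python
# Toy demographic weighting: weight = population share / sample share.
# All numbers below are hypothetical, for illustration only.

population_share = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}
sample_share = {"18-34": 0.15, "35-64": 0.40, "65+": 0.45}  # skewed toward older respondents

# Support for Candidate A within each sampled group (hypothetical)
support_for_a = {"18-34": 0.60, "35-64": 0.50, "65+": 0.40}

# Over-represented groups get weights below 1, under-represented ones above 1
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Unweighted estimate: just average support by the sample's own composition
raw_estimate = sum(sample_share[g] * support_for_a[g] for g in support_for_a)

# Weighted estimate: reweight each group to its true share of the electorate
weighted_estimate = sum(sample_share[g] * weights[g] * support_for_a[g] for g in support_for_a)

print(f"raw: {raw_estimate:.3f}, weighted: {weighted_estimate:.3f}")
```

In this toy sample, the raw numbers understate Candidate A because younger, more favorable voters were under-sampled; weighting pulls the estimate back toward the electorate's actual composition.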
Analysts also warn that in case of a shakeup election, in which demographics are split due to unexpected circumstances or unusual candidates, such weighting could prove disastrous.
Online polls are also growing in prominence and, according to Pew, could be more accurate than phone surveys. Still, though most Americans are now Internet users, online samples remain far from representative of the voting public, and their margins of error are impossible to calculate.
How to get a good reading
Disadvantages aside, polls are still the most accurate tool we have for reading public opinion, and a vital way to amplify the people's voice pre-election, not just the voices of journalists and politicians.
All polls have margins of error, but, as the Guardian says, the numbers often matter less than the larger trend, which comes through more clearly when comparing results across multiple polls. It's also worth remembering that opinions are in constant flux in the early stages.
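Where does that margin of error come from? For a simple random sample, the commonly quoted figure follows from the normal approximation to a proportion. A minimal sketch (the 95 percent z-score of 1.96 and the sample size of 1,000 are illustrative, not tied to any poll in this article):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents showing 50% support carries roughly
# a plus-or-minus 3 point margin at 95% confidence.
moe = margin_of_error(0.50, 1000)
print(f"+/- {moe * 100:.1f} points")
```

This also shows why the headline numbers matter less than the trend: two candidates separated by a point or two in a 1,000-person poll are, statistically speaking, tied.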
If nothing else, perhaps this will convince you to answer the phone next time a pollster gets a hold of your number. After all, the more samples these patient callers get, the more accurate results will be for everyone, and the less surprised we’ll be come election day.