In the reporting of public opinion, few widely discussed concepts are as confusing or misunderstood as “likely voters”. Many poll observers think likely voters are a hard-and-fast classification with clear definitions; this could not be further from the truth. In reality, likely voter identification is constructed in widely varying ways and depends heavily on the individual pollster’s technique and skill. Likely voter classifications are fluid and have a large impact on polling results, particularly on political “horse-race” questions.
I’ve discussed the impact of reporting on different populations (all adults vs. registered voters vs. likely voters) in an earlier blog post (link). To summarize, as we move from all adults to registered voters to likely voters, we are reporting smaller and smaller portions of the electorate. And, due to the demographic makeup of the voting public, as we move to smaller populations we get a voting population that is more hospitable to Republicans. For that reason, we often see Republicans do significantly better in polls of likely voters than in polls of registered voters or all adults.
In this post, I’m going to describe the way Ipsos constructs likely voter identification for our electoral polls. In separate posts, I’ll examine the impacts and implications of our likely voter model, and some ways we might improve how we report on likely voter surveys.
First off, since we are interviewing people, we have to rely on what they recall and are willing to tell us. This presents the first challenge in figuring out who is actually going to vote: people are not good at anticipating what they will do in the future, and they are not always reliable when talking about what they’ve done in the past. For example, people frequently say they are going to vote, but then on Election Day their kid gets sick, they have a deadline at work, or they are tired and the candidates are all boring. Whatever the reason, many people who say they will vote simply don’t show up.
So if people are unreliable, how can we get a reliable measure of who’s likely to show up and vote? The traditional approach – often referred to as the Gallup model (link) – asks people a couple of different questions about their interest in the election and voting. The idea is that someone less committed to voting might answer in the affirmative on one question, but if you ask several follow-up questions on how much they follow politics and elections, their lack of interest will become clear.
A slightly more sophisticated approach – often referred to as the CBS/New York Times model – uses a multi-question battery (like Gallup) and also asks the respondent whether they voted in one or more past elections. The idea here is that past voting behavior is a strong predictor of future voting. In both models, Gallup and CBS/NYT, the researcher sets a threshold for qualification as a likely voter. That is, if someone says they are very likely to vote AND very interested in the election AND frequently follows political news AND voted in the last election, then they are considered likely to vote.
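The cutoff logic of these threshold models can be sketched in a few lines. Note that the question names, response wordings, and the all-or-nothing rule below are hypothetical stand-ins for illustration, not the actual Gallup or CBS/NYT instruments:

```python
# Sketch of a cutoff-style likely voter screen. The questions and
# thresholds are hypothetical; real screens use validated wordings.

def is_likely_voter(resp: dict) -> bool:
    """A respondent qualifies only if they clear every screen question."""
    return (
        resp["intends_to_vote"] == "very likely"
        and resp["election_interest"] == "very interested"
        and resp["follows_political_news"] == "frequently"
        and resp["voted_last_election"]
    )

engaged = {
    "intends_to_vote": "very likely",
    "election_interest": "very interested",
    "follows_political_news": "frequently",
    "voted_last_election": True,
}
print(is_likely_voter(engaged))  # True

# One lukewarm answer disqualifies the respondent entirely:
lukewarm = {**engaged, "follows_political_news": "rarely"}
print(is_likely_voter(lukewarm))  # False
```

The key property of this design is that it is binary: a respondent is either in or out, with no middle ground.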
Ipsos does things a little differently. We start with the CBS/NYT approach and add our own twist. We use five questions – listed below – but instead of setting a cutoff, we assign a value to each response. Answers that indicate a respondent is more interested in voting or the election earn a higher score than answers that indicate disinterest. We then combine the values on the individual questions into an index score for each person in our survey. This index score places each individual on a graduated spectrum of voting odds, from no chance they’ll vote to practically certain they’ll vote.
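In code, the scoring idea looks something like the sketch below. The five question names, response options, and point values are assumptions chosen for illustration; the actual Ipsos questions and weights are not reproduced here:

```python
# Hypothetical point values per response option. Higher points mean the
# answer signals more interest in voting; the real Ipsos weights differ.
SCORES = {
    "intends_to_vote":   {"certain": 3, "probably": 2, "maybe": 1, "no": 0},
    "election_interest": {"high": 3, "medium": 2, "low": 1, "none": 0},
    "follows_news":      {"daily": 3, "weekly": 2, "rarely": 1, "never": 0},
    "voted_2012":        {True: 2, False: 0},
    "voted_2010":        {True: 2, False: 0},
}

def likelihood_index(resp: dict) -> int:
    """Sum the points earned across all five questions into one index."""
    return sum(table[resp[q]] for q, table in SCORES.items())

engaged = {"intends_to_vote": "certain", "election_interest": "high",
           "follows_news": "daily", "voted_2012": True, "voted_2010": True}
print(likelihood_index(engaged))  # 13 (the maximum under these weights)
```

Unlike a cutoff screen, the index preserves gradations: a respondent who is merely lukewarm still gets a score, rather than being discarded.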
The chart below illustrates the average likely voter index score for our September – October 2014 survey participants in 10% increments of the population. This illustrates that about 30% of people are die-hard voters, and everyone else is on a spectrum of more to less likely to vote. This is a simplified depiction of our data; our actual approach allows us to put every individual on a broad rating scale.
We also take a different approach to data collection. Most media pollsters conduct their interviews via telephone, where every minute an operator spends on the phone carries a significant cost. Media pollsters often do not interview people who don’t meet their criteria for likely voters because they don’t want to spend money on interviews they won’t use. Ipsos is different. Since we interview online, at relatively little additional cost, we interview everyone, likely voters and unlikely voters alike. Practically, this means that instead of surveying a subset of the population that we think (guess/hope) matches the actual population of voters, we talk to everyone and can, after the fact, adjust our model of likely voters to best reflect reality.
The third way we are different in our construction of likely voter models is in how we adjust our model based on actual Election Day turnout expectations. In any given American election, between 35% and 70% of eligible voters actually show up to vote. The challenge is that although we usually know the ballpark of the expected turnout (i.e. near 40%), actual turnout is a mystery that is only revealed on Election Day. Other pollsters make educated guesses on turnout and then are stuck with their results. But we talk to everyone, so we can adjust our models on the fly to match the latest information on turnout. If evidence suggests turnout is going to be 42%, we are able to identify the 42% of our survey respondents who are most likely to vote and report on them. If turnout expectations change, we can immediately change our likely voter model to match by classifying more (or fewer) respondents as likely voters.
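The turnout adjustment described above amounts to ranking respondents by their index score and keeping the top share equal to the expected turnout rate. A minimal sketch, assuming respondents carry a precomputed `index` field (the field name and panel data are hypothetical):

```python
def select_likely_voters(respondents: list, expected_turnout: float) -> list:
    """Keep the respondents most likely to vote, where the share kept
    equals the expected turnout rate (e.g. 0.42 for 42% turnout)."""
    ranked = sorted(respondents, key=lambda r: r["index"], reverse=True)
    cutoff = round(len(ranked) * expected_turnout)
    return ranked[:cutoff]

# Toy panel of 100 respondents with assorted index scores.
panel = [{"id": i, "index": i % 14} for i in range(100)]

likely = select_likely_voters(panel, 0.42)
print(len(likely))  # 42

# If turnout expectations rise, rerunning with a higher rate simply
# admits more respondents; no re-interviewing is needed.
print(len(select_likely_voters(panel, 0.55)))  # 55
```

This is why interviewing everyone matters: the cutoff can move after the data are collected, which a pollster who only interviewed pre-screened likely voters cannot do.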
To summarize the Ipsos approach to likely voters:
- We use a modified CBS/New York Times approach with multiple questions on interest and past voting behavior;
- We interview everyone and assign each respondent a “likelihood to vote” score based on how they answer our voter questions;
- We report on the share of people most likely to vote based on actual turnout expectations, and if turnout expectations change, we can change the likely voter model.