Is President Obama up or down?: The effect of question wording on levels of presidential support

Presidential approval ratings are the most ubiquitous polling data out there. Every pollster worth their salt will have some derivation of the standard question—do you approve or disapprove of the way [president] is handling his job? For many, the approval question has become synonymous with ‘public opinion’. Some might argue that this is overly reductionist—that public opinion is a more complex phenomenon which shouldn’t and can’t be reduced to a single question[1]—but still the wider perception prevails. Approval ratings and ‘the will of the people’ are seen as one and the same.

Approval’s fame is not without merit, however. First and foremost, approval ratings have shown themselves to be fairly good predictors of political outcomes, especially elections. The logic is simple and has been well documented. When approval ratings are high, the incumbent or their successor has a good chance of winning the election, while opposition candidates enjoy the same favorable odds when presidential approval ratings are in the gutter (see ‘Much ado about nothing: Obama will be president, again, in 2013’). Second, given their ‘magical’ predictive properties, approval ratings serve as important signals to economic and political actors about near and long-term political fortunes, including the relative chances of securing campaign financing, the eagerness of contenders to throw their names into the ring, and the corporate world’s risk mitigation and investment strategies. They are grist for the political mill.

Given their importance, approval ratings receive special scrutiny from political actors and poll watchers alike. A number of websites, such as RCP and Pollster.com, aggregate approval ratings, among other questions, and provide a running market average of public opinion’s relative bliss or fury. This is an important advance over any single poll, as the overall average minimizes sampling variability and the bias of any one survey shop. The ‘Law of Large Numbers’ at work—thank you Bernoulli!
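To make the intuition concrete, here is a toy simulation (purely illustrative, with made-up numbers rather than anything from RCP or Pollster.com) of how an average of ten polls hugs a hypothetical true approval rate far more tightly than any single poll does:

```python
# Illustrative sketch: why a poll-of-polls average is steadier than any
# single survey. All numbers here are invented for demonstration.
import random

random.seed(42)

TRUE_APPROVAL = 0.47   # hypothetical 'true' approval rate
POLL_SIZE = 1000       # respondents per poll
N_POLLS = 10           # polls feeding the aggregator's average

def one_poll() -> float:
    """Simulate one poll: the share of POLL_SIZE respondents who approve."""
    approvals = sum(random.random() < TRUE_APPROVAL for _ in range(POLL_SIZE))
    return approvals / POLL_SIZE

single = one_poll()
average = sum(one_poll() for _ in range(N_POLLS)) / N_POLLS

print(f"Single poll:         {single:.1%}")   # wobbles around 47% by a few points
print(f"Average of 10 polls: {average:.1%}")  # sits much closer to 47%
```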

Ipsos has tracked approval ratings in the US since 2001—first with AP, then with McClatchy, and now with Thomson Reuters. During this time, our polls have shown a consistent 2 to 4 point difference when compared to the market average (the average of all polls at the time). Looking specifically at the Obama years, our approval ratings have been, on average, 3 points more positive than the market (see table below).

Source: Ipsos, RCP, and Pollster.com. Universe: all polls since February 2009.

This small blip typically goes unnoticed—something that most would attribute to the margin of error of our poll. However, at critical points in time when the President is trending up or down, it does attract the attention of concerned citizens and hate-mail enthusiasts.

This all raises the natural question: why do our polls produce approval ratings slightly more favorable for Obama? We hypothesized two possibilities: (1) we have a problem with our sample composition—more Democrats and fewer Republicans—or (2) we measure presidential approval differently than other polling firms.

We quickly discarded the first hypothesis—that our sample is off, picking up more Democrats than Republicans. Indeed, our party identification closely mimics the market average (see table below)[2].

Source: Ipsos, RCP, and Pollster.com. Universe: all polls in the last 12 months.

If not sample composition, then might it be an issue with question wording? Here we asked two related questions. Do we ask presidential approval differently than other polling firms? And if so, might this account for our systematic difference from the market average?

The first answer is actually yes—we do ask presidential approval differently than what is typically employed by other polling firms. While there is some variation from polling firm to polling firm, for the most part, all ask some close derivation of the traditional Gallup question asked since the 1930s: Do you approve or disapprove of the way [NAME OF PRESIDENT] is handling his job as President?[3]

In contrast, Ipsos asks presidential approval in a slightly different manner. Specifically, we include a ‘mixed feelings’ option, and then we ask a follow-up question for those respondents to determine which way they ‘lean’. Respondents are then allocated to approve or disapprove based on their stated lean.
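For the concretely minded, the allocation logic looks roughly like the sketch below. The answer categories are paraphrased stand-ins, not our actual instrument wording:

```python
# A minimal sketch of the two-step allocation described above; the
# category labels are paraphrased, not Ipsos's actual questionnaire.
from typing import Optional

def classify_respondent(initial: str, lean: Optional[str] = None) -> str:
    """Allocate a respondent to 'approve' or 'disapprove'.

    initial: 'approve', 'disapprove', or 'mixed feelings'
    lean:    follow-up answer, asked only of 'mixed feelings' respondents
    """
    if initial in ("approve", "disapprove"):
        return initial
    # 'Mixed feelings' respondents are asked which way they lean and
    # allocated to whichever side they choose.
    if lean in ("approve", "disapprove"):
        return lean
    return "no opinion"  # declines to lean even when pushed

print(classify_respondent("disapprove"))                 # disapprove
print(classify_respondent("mixed feelings", "approve"))  # approve
```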

So does this question wording difference have an effect? To test this, we employed a split-ballot design over three waves of our monthly dual-frame telephone poll (October to December 2011), randomly allocating the Ipsos wording to half of the sample each wave and the traditional Gallup wording to the other half. We allocated 1,500 interviews to each experimental condition, for a total of 3,000 interviews.
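The assignment step itself is simple randomization; a minimal sketch, with an illustrative wave size rather than our actual fielding details:

```python
# Illustrative split-ballot assignment: each wave's sample is randomly
# split between the two question wordings.
import random

random.seed(2011)

def assign_wave(respondent_ids) -> dict:
    """Randomly allocate half of a wave to each question wording."""
    ids = list(respondent_ids)
    random.shuffle(ids)
    half = len(ids) // 2
    return {"ipsos_wording": ids[:half], "gallup_wording": ids[half:]}

wave = assign_wave(range(1000))  # a hypothetical 1,000-person wave
print(len(wave["ipsos_wording"]), len(wave["gallup_wording"]))  # 500 500
```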

So what did we find?

Well, question wording indeed has an effect, and in the direction we expected. Our Ipsos wording did produce more favorable approval ratings for Obama than the traditional Gallup method—a full four points in total.
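Is a four-point gap on samples this size more than noise? A quick back-of-the-envelope check suggests yes. Note that the 50% versus 46% levels below are assumed purely for illustration; the experiment pins down only the gap and the sample sizes:

```python
# Two-proportion z-test for a four-point gap with 1,500 interviews per
# condition. The 50% vs. 46% approval levels are assumed; only the gap
# and the sample sizes come from the experiment.
from math import sqrt, erf

n1 = n2 = 1500
p1, p2 = 0.50, 0.46  # hypothetical Ipsos-wording vs. Gallup-wording approval

p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided normal tail

print(f"z = {z:.2f}, p = {p_value:.3f}")  # roughly z ≈ 2.19, p ≈ 0.03
```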

Additionally, we profiled the “floaters” who migrate out of the ‘mixed feelings’ middle category. First, they tend to be more Democratic than Republican or Independent. Second, they are also younger and less educated—a profile very close to that of non-responders and those less likely to vote.

Profile of Floaters (Log-odds)

Note: log-odds derived from logistic regression estimates of moving from ‘mixed feelings’ to ‘approve’ of Obama
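For the methodologically curious, the kind of model behind a chart like this can be sketched as follows. The data file and column names are hypothetical stand-ins for the study’s microdata, and the fit uses the statsmodels package:

```python
# Sketch of a logistic regression predicting whether a 'mixed feelings'
# respondent leans toward approving of Obama. File and column names are
# hypothetical, not the actual study data.
import pandas as pd
import statsmodels.formula.api as smf

# One row per 'mixed feelings' respondent, with a 0/1 outcome
floaters = pd.read_csv("floaters.csv")

# leans_approve: 1 if allocated to 'approve', 0 if to 'disapprove'
model = smf.logit(
    "leans_approve ~ C(party_id) + age + C(education)",
    data=floaters,
).fit()

print(model.params)  # the coefficients are the log-odds plotted above
```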

So what are the implications of our findings?

First and foremost, a great deal of relief—now we have concrete evidence explaining why our approval ratings trend more favorably than the market average. Phew!! This, of course, leads us to the natural question: which method produces more robust results? Ipsos’ or the market’s?

Some would argue that our method is actually producing “statistical artifacts” by forcing those who have no opinion to take a position. Put bluntly, one might argue that we are inventing public opinion out of whole cloth. This is the classic nonattitudes perspective, which, in its varying forms, argues that a big chunk of the American populace simply makes up answers to questions[4]. This argument is partially reinforced by our own findings, which suggest that our floaters have the classic non-responder, ‘nonattituder’, nonvoter profile—less educated, less affluent, and younger. Many would make the practical point that these people probably won’t vote, so why should their attitudes matter when looking at electoral outcomes?

Others would argue that our ‘floaters’ are people too: that while they might not have a well-articulated position, they do have some predispositions, or predilections, toward the issue at hand. A case in point: Democrats are much more likely to float than Republicans, and it makes sense that Democrats would be more likely to support Obama than Republicans, right? Reinforcing this point is considerable basic research in political, cognitive, and neuropsychology which shows that people often hold attitudes without being consciously aware of them—a functional evolutionary adaptation to avoid information overload[5]. This perspective debunks the traditional “Conversian” nonattitudes argument by showing that attitude formation is much more complex and cognitively efficient than we ever thought.

So with this evidence, what should Ipsos do? Change the wording or keep the question as is?

Any measurement decision should come down to whether we believe that we are adequately capturing the ‘voice of the people’. Here I believe we are, though perhaps in a slightly different way than the market. We might even make the argument that our method captures the voice of more “peoples” than the traditional method by inviting habitual nonresponders to respond.

Ultimately though, this exercise might say much more about the aggregating vortex of the internet than anything else. Indeed, no one would argue against the wonderful analytic benefits of aggregator sites, like RCP and Pollster.com—I, for one, am a big fan and user. At the same time, they are powerful homogenizing forces in our industry—if you are similar to the market average you are good; if not, you are bad. My only retort to this is that sometimes being average is, well, just average.

NOTE: This article is based on a 2012 AAPOR paper: Young, Clark, and El-Dash, “Is President Obama Up or Down? The Impact of Question Wording on Approval Ratings,” 67th Annual AAPOR Conference, Orlando, Florida, 2012.

[1] Converse, P.E. (1987). “Changing Conceptions of Public Opinion in the Political Process.” Public Opinion Quarterly, Vol. 51, Part 2, Supplement: 50th Anniversary Issue, S12–S24.

[2] Our sample composition analysis includes both univariate and multivariate analysis of both demographic and attitudinal variables. The analysis underscores that our sample is equivalent to the market average. I thought party identification was the simplest and most intuitive way to summarize this analysis.

[3] Note that the wording of the ‘traditional Gallup’ question does vary across survey shops. However, the one characteristic they have in common is that they don’t push leaners or undecideds.

[4] Converse, Philip E. (1964). “The Nature of Belief Systems in Mass Publics.” In Ideology and Discontent, ed. David Apter. New York: Free Press; Converse, P.E. (1970). “Attitudes and Non-Attitudes: Continuation of a Dialogue.” In The Quantitative Analysis of Social Problems, ed. E.R. Tufte. Reading, MA: Addison-Wesley, 168–189; Bishop, George F. (2004). The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls. Lanham, MD: Rowman and Littlefield Publishers.

[5] Sniderman, Paul M., Richard A. Brody, and Philip E. Tetlock (1991). Reasoning and Choice: Explorations in Political Psychology (Cambridge Studies in Public Opinion and Political Psychology). Cambridge: Cambridge University Press; Greenwald, A.G., & Banaji, M.R. (1995). “Implicit social cognition: Attitudes, self-esteem, and stereotypes.” Psychological Review, 102(1), 4–27; Taber, Charles S. (2003). “Information Processing and Public Opinion.” In Handbook of Political Psychology, ed. David O. Sears, Leonie Huddy, and Robert L. Jervis, 433–476. London: Oxford University Press.

 
