Why the Polls Miss the Mark (Part 5)

For the next few weeks, we’re shifting the focus of our blog from examining legislation to discussing issues related to public opinion polling.

With the 2018 mid-terms approaching, it’s important to understand the pitfalls polls face and why their findings shouldn’t be taken at face value. Our founder has written an extensively researched book on this subject, due to be released next month, which explains the issues in detail and what it would take to truly address them. This series consists of sections from that book.


Any given question alters a respondent’s mindset by creating cognitive anchors, which bias the answers to subsequent questions. Priming people to think a certain way by inducing certain emotions is an inherent problem with the question-and-answer format used in survey research. Because the effect is cumulative, answers to questions near the end of longer surveys may be significantly distorted.

Response priming is the reason the presidential approval question is almost always asked first in political polls. Placing that question after a series of questions about specific policies gives those policies extra weight in the respondent’s mind when they go on to answer the approval question.

As a thought experiment, consider a hypothetical survey about Richard Nixon seeking to gauge how positively or negatively he is remembered by those born after his death in 1994. This survey has three questions:

  1. Do you approve or disapprove of former President Richard Nixon’s job performance?
  2. Do you approve or disapprove of former President Richard Nixon’s actions during the Watergate scandal?
  3. Do you approve or disapprove of former President Richard Nixon’s actions to end the Vietnam War?

Leaving the questions in the order above would provide a clean answer to the job performance question, as would flipping the positions of the Watergate and Vietnam questions. Any other question order would likely taint responses to the job performance question due to the anchoring effect of response priming.

Asking question 2 first and placing question 1 in the middle would elevate awareness of the scandal that forced Nixon’s resignation, giving more weight to his negatives when the respondent then considers whether they generally approve or disapprove of his overall performance.

Inverting that order by placing question 3 at the top, 1 in the middle, and 2 at the bottom would produce the opposite result through the same effect. By giving weight to the sentiment that propelled Nixon’s electoral success before asking about his performance, we should see higher overall levels of approval. Such an ordering would also tend to lower how strongly respondents disapprove of Nixon’s Watergate behavior: a respondent primed to regard the 37th President in a more positive light may view those actions as less significant.

This effect can be somewhat reduced by randomizing the order in which questions are presented. While any legitimate political poll will pose the general approval question first, the remaining questions may be presented to each respondent in a different order. Randomization does not eliminate response priming in the data from any one respondent, but it is employed to reduce the cumulative effect across the overall sample.

In the Nixon survey, if one third of respondents are given the original 1-2-3 question order, one third are given the 2-1-3 order, and the remaining third see 3-1-2, the effects of response priming should, in theory, balance out. This does not mean they are eliminated, only that their aggregate net effect produces data roughly equivalent to what would have been collected had no response priming occurred.
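To make the rotation concrete, here is a minimal Python sketch of how respondents might be cycled through the three question orders so each order covers roughly one third of the sample. The question wordings are taken from the thought experiment above; the rotation tuples, dictionary, and function names are illustrative assumptions, not part of any actual polling software.

```python
# Hypothetical question set for the Nixon thought experiment.
QUESTIONS = {
    1: "Do you approve or disapprove of former President Richard Nixon's job performance?",
    2: "Do you approve or disapprove of former President Richard Nixon's actions during the Watergate scandal?",
    3: "Do you approve or disapprove of former President Richard Nixon's actions to end the Vietnam War?",
}

# The three rotations described above: the neutral order plus the two
# orders whose priming effects are expected to offset one another.
ROTATIONS = [
    (1, 2, 3),   # neutral: approval asked before either priming question
    (2, 1, 3),   # Watergate first: primes negative sentiment
    (3, 1, 2),   # Vietnam first: primes positive sentiment
]

def assign_rotation(respondent_index: int) -> tuple:
    """Cycle respondents through the rotations so each order is used
    for roughly one third of the sample."""
    return ROTATIONS[respondent_index % len(ROTATIONS)]

def administer(respondent_index: int) -> list:
    """Return the question wordings in the order this respondent sees them."""
    return [QUESTIONS[q] for q in assign_rotation(respondent_index)]

# Example: the first six respondents cycle through all three orders twice.
for i in range(6):
    order = assign_rotation(i)
    print(f"respondent {i}: order {order}, first question: {QUESTIONS[order[0]][:55]}...")
```

A systematic cycle is used here rather than per-respondent shuffling so the three orders end up in exactly equal proportions; a pollster could just as easily draw the rotation at random for each respondent and expect the same balance over a large enough sample.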


This post is an excerpt from our founder’s book Data in Decline: Why Polling and Social Research Miss the Mark, to be released in October 2018, partially reformatted for this content medium.

Part 4 of this series can be found here.