Why the Polls Miss the Mark (Part 4)

For the next few weeks, we’re shifting the focus of our blog from examining legislation to discussing issues related to public opinion polling.

With the 2018 midterms approaching, it’s important to understand the pitfalls polls face and why their findings shouldn’t be taken at face value. Our founder has written an extensively researched book on this subject, due to be released next month, which explains both the issues in detail and what it would take to truly address them. This series consists of sections from that book.


Although live interviews collect more data and draw from a respondent pool that is less structurally prone to bias, there is some indication that data quality can suffer when respondents provide information to another person rather than to a computerized system. One factor that is more pronounced in person-to-person data collection is the Bradley Effect, sometimes called the Wilder Effect.

Named for former Los Angeles Mayor Tom Bradley, this phenomenon centers on racism as an electoral motivator. It occurs when a white candidate runs against a non-white candidate and some white respondents, seeking to hide their racism, tell pollsters they support the non-white candidate when in fact they cast their votes for the white one.

This phenomenon has been subject to several criticisms. The polls from which the effect’s name originates, conducted ahead of the 1982 election for Governor of California, suffered from a number of flawed practices. Among them were the same timing problem that led to Gallup’s blown call in the 1948 Presidential race between Harry Truman and Thomas Dewey (polling stopped too far ahead of Election Day to capture late shifts in opinion), as well as a failure to account for an unexpectedly large number of absentee ballots. Barack Obama’s elections also led researchers to discount the effect.

Although the Bradley Effect has largely been written off by social scientists, the term has evolved to cover essentially all cases in which respondents lie or otherwise deliberately provide false data to pollsters. The concept lives on because the general principle of survey respondents misinforming interviewers has seemingly manifested in other forms.

The Shy Tory Factor is one such manifestation, one that concerns political parties and philosophies in general rather than specific candidates. The phenomenon was first identified in Great Britain, where Conservative voters were found to answer pollsters less than honestly, reporting weaker support for the Tory party than they actually held. The same effect has been found to understate support for the Republican Party in the United States.

As a result, candidates and issues favored by the right in both nations have historically performed better at the ballot box than opinion polls predicted. This makes sense to some extent: when respondents are already disinclined to trust pollsters, they have little incentive to provide honest answers.
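A rough numerical sketch makes the mechanism concrete. The short Python snippet below uses entirely made-up figures (a hypothetical 51 percent of voters truly backing the right-leaning candidate and a hypothetical 6 percent of them misreporting their choice), and it follows the framing above in which reluctant respondents claim to support the other side; it is purely illustrative, not a model of any real election.

```python
# Illustrative sketch only: every number here is invented for demonstration.
# It shows how a small share of respondents who conceal their true preference
# (and claim to back the other side, as described above) can flip a poll's margin.

true_right = 0.51   # hypothetical true share of voters backing the right-leaning candidate
true_left = 1 - true_right
shy_rate = 0.06     # hypothetical share of those voters who misreport their choice

# What the poll records: "shy" voters are counted for the other candidate.
polled_right = true_right * (1 - shy_rate)
polled_left = true_left + true_right * shy_rate

print(f"Actual margin (right minus left): {true_right - true_left:+.1%}")
print(f"Polled margin (right minus left): {polled_right - polled_left:+.1%}")
# With these assumed figures, a 2-point actual win shows up as a roughly 4-point polled deficit.
```

In reality some reluctant respondents simply decline to answer rather than misreport, which biases results in a subtler way, but the arithmetic shows why even a modest rate of dishonest answers can consistently tilt published numbers.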

The pervasive nature of echo chambers, and the confirmation bias that accompanies them, compounds the problem: people predisposed to believe that their views will not be accurately represented in research findings can circulate among themselves examples where that was genuinely the case, continually reinforcing the belief. The confluence of these phenomena amplifies the effects of the Shy Tory Factor to the point where the data collected may consistently misrepresent reality.


This post is an excerpt from our founder’s book Data in Decline: Why Polling and Social Research Miss the Mark, to be released in October 2018, partially reformatted for this medium.

Part 3 of this series can be found here.