On Politics
Analysis and perspective about what's happening in the political realm.

Gallup’s Open Book


It isn’t unusual for individuals, organizations, or even political parties to make mistakes, but it is unusual, certainly in Washington, to see anyone own up to them. What is truly extraordinary is for any person or group to conduct, release, and publicize an exhaustive research project detailing what went wrong and why. In the past three months, we have now seen two groups do just that.

First, the Republican National Committee, chaired by Reince Priebus, issued its “Growth and Opportunity Project,” essentially an autopsy of what went wrong for the GOP in the last election. The report looked at the mistakes the party made and what it needed to do to get Republicans back on track. Now the Gallup Organization has released an executive summary of its own postmortem, which explains how its polling of likely voters ended up projecting a Mitt Romney win by 1 percentage point, 49 percent to 48 percent, in an election the Republican nominee went on to lose by almost 4 points (3.85, to be exact). Interestingly, Gallup’s final polling among all registered voters, as opposed to the smaller subset of voters judged most likely to go to the polls, was much closer to the mark, showing President Obama ahead by 3 points, 49 percent to 46 percent.

In fairness, all of the major media polls underestimated Obama’s victory margin. Some showed Obama winning by only 1 or 2 points; others had him ahead by 3 points. Still others had the race tied, and a couple had Romney leading. In fact, the closest to the pin was a Democracy Corps poll conducted by Democratic pollster Stan Greenberg’s firm, Greenberg Quinlan Rosner Research, which showed Obama winning by 4 points. Gallup not only owned up to being the farthest off among the big-name pollsters but also announced, just after the election, that it would figure out why.

In an hour-and-a-half session this week, Gallup Editor in Chief Frank Newport and a team of survey-research methodologists from three universities led by the University of Michigan’s Michael Traugott walked reporters through their findings. The report was the result of a comprehensive effort to take apart and reverse-engineer Gallup’s election-polling procedures.

The team, which also included an array of Gallup’s own staff methodologists and statisticians, is continuing its research, but so far it has identified four problem areas—while noting that there was no single reason Gallup’s numbers were off. The biggest problem was Gallup’s screening questions, which the firm used to ascertain who would be the most likely to cast ballots. For many years, Gallup has relied on seven questions to decide who makes the cut for “likely voters.” The firm asked such questions as, “How much thought have you given to the election?” and “Do you happen to know where people who live in your neighborhood go to vote?” Gallup also asked respondents whether they had voted before; how often they had voted; whether they voted in 2008; and to rate themselves on a scale of 1 to 10 in terms of their likelihood of voting. The seven questions, along with one additional query asking respondents if they were registered to vote, constituted Gallup’s screen for “likely voters.” Most pollsters use fewer questions, and, in polling on the state, local, or congressional district level, some pollsters use lists based on voter rolls so they can tell which individuals are high-propensity voters, but that tool is not available nationally.
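The screening procedure described above amounts to scoring each registered respondent on a battery of questions and keeping only the highest scorers. As a rough sketch of how such a filter works in general, the snippet below scores respondents and retains the top slice up to an assumed turnout rate; the question names, scoring, and cutoff are illustrative assumptions, not Gallup's actual model.

```python
# Hedged sketch of a likely-voter screen of the kind the article describes.
# Question keys, one-point-per-question scoring, and the turnout cutoff are
# all illustrative assumptions, not Gallup's published procedure.

def likely_voter_score(respondent):
    """Count affirmative answers to seven hypothetical screening questions (0-7)."""
    questions = [
        "thought_given_to_election",
        "knows_polling_place",
        "voted_before",
        "votes_often",
        "voted_in_2008",
        "plans_to_vote",
        "self_rated_likelihood_high",  # e.g., 7 or higher on a 1-10 scale
    ]
    return sum(1 for q in questions if respondent.get(q))

def screen_likely_voters(respondents, expected_turnout=0.6):
    """Keep only registered respondents, ranked by score, until the retained
    share of the registered sample matches an assumed turnout rate."""
    registered = [r for r in respondents if r.get("registered")]
    ranked = sorted(registered, key=likely_voter_score, reverse=True)
    cutoff = int(len(ranked) * expected_turnout)
    return ranked[:cutoff]
```

The design point the report highlights is visible even in this toy version: which respondents survive the cut depends entirely on which questions are asked and where the cutoff falls, so a screen tuned to past electorates can systematically tilt the retained sample.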

The report found that “Gallup’s likely-voter procedures in 2012 shifted the race 4 points in Romney’s favor, 1 point more in Romney’s favor than the average shift among other polls for which registered-voter and likely-voter information is available.” According to the study, questions about voting history and the “thought given to the election” accounted for essentially all of the difference between Gallup’s final numbers and those of other firms that released both registered- and likely-voter numbers separately.

The researchers also identified three other problems, but each was significantly less important than the likely-voter screening issue. One was Gallup’s switch during the election year from random-digit-dialing sampling among landline respondents to sampling only among those with listed phone numbers. This resulted in a sample of landline users who were older and more Republican than samples generated through the random-digit-dialing approach. Another problem was even more arcane, concerning the distribution of the sample within regions and time zones. The final issue had to do with how Gallup asked and handled race and ethnicity questions.

Few people watch Gallup and other pollsters more closely than I do, and I confess that during the month or so leading up to Election Day, I found myself paying far more attention to Gallup’s daily registered-voter numbers than to its likely-voter numbers. Something just felt wrong about the latter. To determine better ways to prevent these mistakes, Gallup and its outside team of academicians will be conducting extensive polling in this year’s New Jersey and Virginia gubernatorial races, matching up various screening questions and techniques with a postelection examination of who actually voted. (No results will be released until after the elections.)

It had to be painful for the Gallup folks to air their dirty laundry, but the firm has a prestigious and valuable brand to protect, and its results in 2016 will be scrutinized more than ever. Gallup has to get it closer to right next time.

This article appears in the June 8, 2013, edition of National Journal.
