Promising Practices
A forum for government's best ideas and most innovative leaders.

Does Performance Matter in Government?


I’m a fervent advocate of the use of performance information as an important management and decision-making tool. So it’s not a natural act for me to raise this kind of question.

However, several articles in a recent issue of Public Administration Review have caught my attention and challenged some of my assumptions.

Performance measurement is about measuring the performance of government and its programs—but what about measuring the effectiveness of performance measures themselves? Ironically, that turns out to be harder than you might think.

Business gurus Tom Peters and Peter Drucker have often been cited as saying: “What gets measured gets done.” That has been a mantra of many managers over the years. But the authors of these articles show that it isn’t that simple—and it may not even be true.

Why I’m An Enthusiast

I have seen cases where performance management has been used to help organizations focus on and achieve results. In recent years, the federal government has documented instances where the introduction of performance management systems has led to:

  • Reduced crime on Indian reservations
  • Increased use of electronic deposits by federal aid recipients
  • Reduced chronic homelessness among veterans
  • Reduced smoking
  • Improved health and education outcomes
  • Reduced wait times for government services

And when combined with analyses of non-performance data (such as weather or day of the week) and using evidence-based approaches with big data, it has led to the prediction and reduction of crime in local communities.

But shouldn’t there be a longer list?  Both the Chief Financial Officers Act of 1990 and the Government Performance and Results Act of 1993 presumed that if financial and performance information were made available to career managers and political decision makers, better decisions would get made. Reports over the past 20 years by the Government Accountability Office have found that only about one-third of managers in the executive branch use performance information to any great extent. The Congressional Research Service concludes that congressional use “remains to be seen.”

New laws, the GPRA Modernization Act of 2010 and the Digital Accountability and Transparency Act, have tried to address this lack of use. The first requires senior leaders to convene regular forums for performance-related conversations and decisions, and to report publicly on a more frequent basis. The second presumes that more immediate, more granular financial information will be more useful to managers.

This all brings us to a pair of recent academic articles. 

Measuring Doesn’t Matter (Part 1) 

Professor Ed Gerrish, at the University of South Dakota, examines in his article “the current state of performance management research” using quantitative techniques. He concludes: “The act of measuring performance may not improve performance, but managing performance might.”

To reach this conclusion, he conducted an Internet search based on selected keywords that yielded 24,737 records, of which only 49 qualified as acceptable studies of public performance management for inclusion in the final analysis. These 49 studies contained 2,188 “effect sizes,” and these “effects” were the units of his analysis. The studies were sorted into six policy areas (such as law enforcement, healthcare, and social services), then ranked by date of publication (1988-2014).

He found that the effect of just measuring performance was “typically considered to be negligible.” However, he found that: “performance management systems that use best practices are two to three times more effective than the ‘average’ performance management system.” Best practices included training, leadership support, voluntary adoption of the systems, and a mission orientation.

Does the policy area matter as to whether the use of performance management is more effective? “It does not appear that the impact of performance management in particular policy areas is nearly as systematic as how they are implemented.”

“Meta-regressions find that performance management systems tend to have a small but positive average impact on performance in public organizations. . . . When combined with performance management best practices . . . the mean effect size is much larger; two or three times as large.” While this may seem small, “it is larger than other recent examinations of policy interventions.” 

So, the impact is at least positive. Still, Dr. Gerrish hedges his conclusions, noting: “there is still unexplained variation in the impact of performance management on performance in the public sector.”

When I consulted colleagues and experts in the field, they noted that the studies were done before the Modernization Act took effect (which they think will have a positive impact on the use of performance information—hope springs eternal). They also noted that the literature review did not take into account studies undertaken by governmental audit or evaluation offices, or by think tanks (e.g., RAND, MITRE), where practitioners might come to different conclusions than academics.

Measuring Doesn’t Matter (Part 2) 

While the conclusions of the first article might be easily explained away, the conclusions of a second article are a bit more distressing. In this case, a pair of Danish academics at Aarhus University, Martin Baekgaard and Soren Serritzlew, used experimental methods focusing on “subjects’ ability to correctly assess information rather than on their attitudinal responses.”

Using an experimental design, they surveyed 1,784 Danish citizens, who were representative of the population as a whole.

They asked various subsets of citizens about their prior beliefs, then presented them with unambiguous performance information and asked them to interpret it. The survey asked them to interpret data on hip operations in a public versus a private hospital, and to rate which hospital had performed better. For another subset of citizens, they reversed the labels on the data (switching “public” and “private”) and re-ran the survey. They then asked a third subset, with similar characteristics, the same performance questions, but labeled “Hospital A” vs. “Hospital B.”

The first subset of citizens was asked to interpret how often operations were performed with and without complications. Respondents concluded that “complications were much more likely to occur in the public than in the private hospital,” even though the data showed the reverse. Only when the data were labeled “Hospital A” vs. “Hospital B” did respondents correctly identify which hospital had performed better.

The authors noted: “The introduction of performance information in public administrations was based on the idea that information on organizational performance may improve decision making and ultimately lead to greater public value for taxpayer money.” However, the results of their survey suggest that individuals’ existing political values and beliefs “may also affect individuals’ ability to correctly interpret even unambiguous performance information.”

They conclude: “The fact that performance information is systematically misinterpreted . . . calls into question the potential of performance information . . . The findings may also help explain why the introduction of performance information systems in many cases has limited effects on the actual performance of government institutions.”

They do ask “whether misinterpretations can be overcome, for instance, by presenting performance information in a different way.” An academic article in another journal, by Dr. Donald Moynihan at the University of Wisconsin, suggests that this might be the case—and the solution. Yet another academic, Dr. Beryl Radin at Georgetown University, warns: “While the concept of evidence-based decisions may have great appeal, information is rarely neutral and, instead, cannot be disentangled from a series of value, structural, and political attributes.” It’s all worth pondering.

But academics aside, practitioners might say that Winston Churchill’s diagnosis of democracy might well apply to performance management: “Democracy is the worst form of government, except for all the others.” 

John M. Kamensky is a Senior Research Fellow for the IBM Center for the Business of Government. He previously served as deputy director of Vice President Gore's National Partnership for Reinventing Government, a special assistant at the Office of Management and Budget, and as an assistant director at the Government Accountability Office. He is a fellow of the National Academy of Public Administration and received a Masters in Public Affairs from the Lyndon B. Johnson School of Public Affairs at the University of Texas at Austin.
