Ten Years On, How Has the Federal Performance System Performed?

An early skeptic finds there’s evidence for optimism, and much the Biden administration can build on.

The chaotic last days of the Trump administration coincided with the 10th anniversary of the Government Performance and Results Act Modernization Act. Trump’s Office of Management and Budget director, Russell Vought, marked the occasion by undermining the law to the greatest degree possible: He removed OMB’s oversight role and agency performance reporting requirements. As Jason Miller, the former Obama administration executive tapped by President Biden to lead his management agenda, and the rest of the new OMB management team consider what to do next, the evidence suggests that Vought’s tossing the baby out with the bathwater was a significant mistake. 

Vought’s reasoning was that collecting the data was too burdensome, and that the public was not interested. This is simultaneously unsurprising and a red herring. Of course few members of the public spend their days looking at performance data on federal websites. And we’ve known for years that Congress is, at best, a haphazard user of performance information. 

But public or legislative indifference is not a good reason to abandon a performance system. For one thing, a goal of the law is transparency—the idea that we track the effects of public spending. For another, it is wrong to assume that just because most people aren’t avid consumers of performance data, the public does not care about government performance. They do care. Witness the recent uproar about performance problems with the U.S. Postal Service. People simply expect that federal employees use data wisely to improve services. And here, at least, we have some evidence that the current system is working. 

Over the last 20 years I have studied patterns of performance information use among public officials. The research that I, and others, have done has gradually changed my mind about the U.S. performance system, converting me from a skeptic, who thought the costs outweighed the benefits, to a cautious optimist. Let's talk about that research.  

A 2016 multi-country World Bank study of performance systems identified recurring problems. For one thing, there is no magic bullet. Most performance systems start with great ambitions, promising a revolution in governance, and measure too many things. When those ambitions are not satisfied, disappointment follows and the system is abandoned, often only for a near-identical one to be imposed on jaded bureaucrats a few years later. The costs of performance systems are real, but they’re much greater when the systems are never properly implemented. 

The systems that succeeded, as in the United States, kept the same framework in place and adapted over time. Stability matters for a number of reasons. First, it establishes a clear commitment for employees: They know they cannot simply wait out an administration. Second, cultural change is slow; it takes years to push agencies to focus on results. Third, stability creates room to learn and improve. Countries that stuck with it eventually came to the same conclusion: The best, and perhaps only, purpose of performance systems is as an internal management tool.

My co-author Alexander Kroll and I have tracked how federal managers use performance data over time, using four waves of GAO survey data spanning 17 years and covering the original Government Performance and Results Act (GPRA), the Bush-era Program Assessment Rating Tool (PART), and the GPRA Modernization Act (GPRAMA). 

What did we find? The current system, GPRAMA, has worked where GPRA and PART failed: pushing managers to use performance data to make decisions.

To analyze the data, we looked at the types of organizational routines each reform created. This turned out to be the key difference. Both GPRA and PART created routines of performance reporting, but generated a passive response in which agencies provided the data and did little else. GPRAMA created routines of performance information use, such as quarterly data-driven reviews and the creation of cross-agency and agency priority goals. Such routines made federal managers talk to each other about performance. In previous work, we also found that GPRAMA prodded managers to pay more attention to program evaluations, an important concern given the implementation of the Evidence Act.

A key conclusion of our research was that the pattern of incremental learning and improvement in the U.S. performance system mattered. PART was a response to GPRA, and GPRAMA took elements of Bush-era policies and put them into statute. For example, while performance was once the responsibility of an agency budget shop, GPRAMA empowered specialist Performance Improvement Officers to engage agency leaders on performance goals. Our research has consistently found that such leadership commitment makes a big difference as to whether the data is used or not. 

Ten years on from the passage of the GPRA Modernization Act, it is time for another careful reevaluation, but Vought’s evidence-free dismissal of the system is anything but that. A good starting point would be to acknowledge where GPRAMA has worked, build on those successes, and set realistic goals for improvement. 

Andy Feldman and Kathy Stack identify such goals, which can be achieved within the current law with willing leadership. The current system already allows for the data-driven reviews and cross-agency goals needed to drive performance. Data quality is always an issue, as is connecting the performance system to White House goals. But these are arguments for maintaining an OMB role in pushing agencies to improve data quality, align with presidential priorities, and minimize duplication of goals. The Biden team can usefully frame OMB’s role by asking, in broader terms, how the existing performance system can facilitate the sort of tangible improvements in service quality and public outcomes that Biden promoted in his campaign and early executive orders.  

Donald Moynihan is the inaugural McCourt chair at the McCourt School of Public Policy, Georgetown University.