What do we have to show for more than two decades of focus on results?
For more than 20 years, since the passage of the 1993 Government Performance and Results Act, federal agencies have been under the obligation to establish strategic plans, set goals and measure their progress toward achieving them. Their obligations were further refined with the passage of the Government Performance and Results Act Modernization Act in 2010.
So, where do we stand? Has government performance improved as a result?
The fact that no one really knows says a lot about how efforts to focus federal agencies and managers on delivering results have worked out.
In a hard-hitting paper in the West Virginia Law Review, Seth Harris, who served as deputy Labor Secretary from May 2009 to January 2014, argues that there are several fundamental problems with the government’s approach to performance measurement and reporting. Two key issues stand out:
- Congress has exempted itself from the obligation of conducting effective oversight of performance.
- Agencies and managers are not held accountable in a meaningful way for achieving performance goals.
In the paper, Harris laments his inability to get anyone on Capitol Hill to show any interest in reviewing Labor’s annual performance index:
Beginning in fall 2013, the Labor Department’s congressional affairs office sought to organize a meeting with congressional staff at which I would brief them on the latest index and discuss the department’s performance management program. How many congressional staff agreed to attend the briefing? Zero. Based on prior experience, I considered that response to be an accurate representation of their bosses’ interest in the topic.
Likewise, Harris reports, during the five years he held the second-ranking post at Labor (and served as the department’s chief operating officer), he was never called to testify before the department’s authorizing committees or appropriations subcommittees about Labor’s performance.
Part of the problem lies in the way GPRA and GPRAMA are structured, Harris argues. GPRA, for example, provides little in the way of a definition of success, yet it spells out detailed process requirements for publishing strategic plans and annual performance reports.
Harris has this to say about Labor’s annual report:
In a meeting early in 2009 with the department’s central performance staff, I asked if the glossy, aesthetically pleasing, award-winning report was a “report to report,” or a report that was used to manage. Sheepishly, the staff admitted it was a “report to report” and that managers did not use it to assess or improve their agency’s performance.
Even with the improvements of GPRAMA, Harris argues, the system is still not structured to encourage meaningful, measurable performance improvements:
There are no consequences for an agency’s failure to comply [with performance laws] or for its poor performance. Budgets are not cut, and programs are not eliminated. Appropriations decisions in Congress are driven more by ideology and constituency politics than evidence. … There is no requirement that political appointees and senior career leaders suffer discipline for a failure to comply with GPRA or poor programmatic or departmental performance.
One result is that, more than two decades in, the universe of people who genuinely care about government performance remains very small. The process of setting goals and measuring performance has been delegated to a group of dedicated people who, by necessity, spend much of their time ensuring compliance with congressional mandates and administration directives.
In some parts of government, performance has generally improved as a result. In others, serious problems remain. But in vast swaths of the bureaucracy, it’s very difficult to tell whether things are getting better or worse. And the only time the topic becomes the subject of public debate is when an agency or individual screws up.