Picking up where the Bush team left off in setting up meaningful performance metrics.
By their very definition, metrics are supposed to be standard and provide a way to measure results despite changing environments. But as it turns out, metrics for measuring federal programs are as likely to change as White House curtain colors or presidential puppies.
In establishing its Program Assessment Rating Tool, the Office of Management and Budget under President Bush toiled to develop meaningful metrics that could be applied to programs across government, while acknowledging the extensive challenges.
In a 2003 PART guidance document, OMB officials compared defining the right performance measures with talking to a 4-year-old. "Whatever you say, the response is always 'Why? Why? Why?' . . . getting to a good measure can often grow out of asking why a certain activity, input or output is important and what it is really trying to achieve that matters to the public." Both supporters and critics of the PART system credit the Bush administration for asking "why," but differ on how much progress it made.
PART was the most comprehensive, transparent assessment of program performance the federal government ever conducted, according to Robert Shea, who managed the program as OMB's associate director for administration and government performance during the Bush administration. But, he acknowledges, that doesn't mean it was perfect. "Could we get more input? Could it be more open? Could it elevate the focus on results to an even greater degree? Yeah," says Shea, who is now director of the global public sector at accounting firm Grant Thornton LLP.
That, essentially, is what the Obama administration says it wants to do. OMB Director Peter R. Orszag has called PART well-intentioned but flawed, and says the agency is overhauling its performance metrics system. The man overseeing that overhaul is Jeffrey Zients, OMB deputy director for management and chief performance officer.
In advance of his confirmation hearing, Zients told lawmakers the primary federal management challenge the administration faces is to restore the American people's faith in government to perform effectively, efficiently and transparently. "To do that, we need to make significant and measurable progress in creating an outcome-oriented performance measurement program," he wrote in his pre-hearing questionnaire.
Zients credited PART with measuring performance at the program level for the first time, but said the system did not lead to an increased use of performance information in the decision-making process. Like Orszag, Zients also criticized PART for failing to establish sufficient outcome-based metrics.
If PART failed in this respect, it was not for lack of trying. The Bush administration's OMB issued a number of memos detailing the difference between outputs (the goods and services produced by a program and provided to the public or others) and outcomes (the intended result or consequence of carrying out a program or activity). PART memos acknowledged federal executives are more likely to manage against outputs because they are more easily measured and can be more directly controlled.
Despite this reality, Bush administration OMB officials urged agencies to establish metrics to measure outcomes. "The PART strongly encourages the use of outcomes because they are much more meaningful to the public than outputs, which tend to be more process-oriented or means to an end," a 2003 memo states. "Outcomes may relate to society as a whole or to the specific beneficiaries of programs, depending on the size and reach of the program."
Zients says metrics should evaluate the results programs deliver for the public. The similarity between that statement and those made by the Bush team could serve as a warning to Zients that establishing solid performance metrics is easier said than done.
John Mercer, government performance specialist and president of Strategisys LLC, says the Obama administration is making it clear it wants to put a greater emphasis not just on establishing metrics and collecting data, but also on using that performance information to improve programs. "But even political leadership at OMB in the previous administration wanted to do that and was frustrated that performance information wasn't being used more," Mercer says. "It's one thing to come up with an assessment of a program and evaluate it; it's another thing to have programs use that to improve performance."
Looming over the process of overhauling performance evaluation is the fear that OMB might do too good a job establishing metrics. Employees at the Defense Contract Audit Agency, for example, say a draconian commitment to metrics caused the agency's numerous and well-publicized problems. In the wake of a scathing July 2008 report from the Government Accountability Office showing that DCAA had neglected its oversight duties and developed an inappropriate relationship with industry, one 25-year veteran of the agency said, "In the end, defense contractors big and small are getting away with murder because they know we at DCAA are slaves to the metrics."
Zients acknowledged this concern before his confirmation, saying he would work with agencies and other stakeholders to develop a performance framework that not only lays out clear standards but also helps them advance. "My goal in taking on this role is to improve the performance of the federal government, not to increase the burden on federal program managers and distract them from effectively managing their programs," he says. "Creating a performance evaluation process that is too rigid or standardized might become a compliance exercise."