For Good Measure

Picking up where the Bush team left off in setting up meaningful performance metrics.

By their very definition, metrics are supposed to be standard and provide a way to measure results despite changing environments. But as it turns out, metrics for measuring federal programs are as likely to change as White House curtain colors or presidential puppies.

In establishing its Program Assessment Rating Tool, the Office of Management and Budget under President Bush toiled to develop meaningful metrics that could be applied to programs across government, while acknowledging the extensive challenges.

In a 2003 PART guidance document, OMB officials compared defining the right performance measures with talking to a 4-year-old. "Whatever you say, the response is always 'Why? Why? Why?' . . . getting to a good measure can often grow out of asking why a certain activity, input or output is important and what it is really trying to achieve that matters to the public." Both supporters and critics of the PART system credit the Bush administration for asking "why," but differ on how much progress it made toward answers.

PART was the most comprehensive, transparent assessment of program performance the federal government ever conducted, according to Robert Shea, who managed the program as OMB's associate director for administration and government performance during the Bush administration. But, he acknowledges, that doesn't mean it's perfect. "Could we get more input? Could it be more open? Could it elevate the focus on results to an even greater degree? Yeah," says Shea, who now is director of the global public sector at accounting firm Grant Thornton LLP.

That, essentially, is what the Obama administration says it wants to do. OMB Director Peter R. Orszag has called PART well-intentioned but flawed, and says the agency is overhauling its performance metrics system. The man overseeing that overhaul is Jeffrey Zients, OMB deputy director for management and chief performance officer.

In advance of his confirmation hearing, Zients told lawmakers the primary federal management challenge the administration faces is to restore the American people's faith in government to perform effectively, efficiently and transparently. "To do that, we need to make significant and measurable progress in creating an outcome-oriented performance measurement program," he wrote in his pre-hearing questionnaire.

Zients credited PART with measuring performance at the program level for the first time, but said the system did not lead to an increased use of performance information in the decision-making process. Like Orszag, Zients also criticized PART for failing to establish sufficient outcome-based metrics.

If PART failed in this respect, it was not for lack of trying. The Bush administration's OMB issued a number of memos detailing the difference between outputs (the goods and services produced by a program and provided to the public or others) and outcomes (the intended result or consequence of carrying out a program or activity). PART memos acknowledged federal executives are more likely to manage against outputs because they are more easily measured and can be more directly controlled.

Despite this reality, Bush administration OMB officials urged agencies to establish metrics to measure outcomes. "The PART strongly encourages the use of outcomes because they are much more meaningful to the public than outputs, which tend to be more process-oriented or means to an end," a 2003 memo states. "Outcomes may relate to society as a whole or to the specific beneficiaries of programs, depending on the size and reach of the program."

Zients says metrics should evaluate the results programs deliver for the public. The similarity between that statement and those made by the Bush team could serve as a warning to Zients that establishing solid performance metrics is easier said than done.

John Mercer, government performance specialist and president of Strategisys LLC, says the Obama administration is making it clear it wants to put a greater emphasis not just on establishing metrics and collecting data, but also on using that performance information to improve programs. "But even political leadership at OMB in the previous administration wanted to do that and was frustrated that performance information wasn't being used more," Mercer says. "It's one thing to come up with an assessment of a program and evaluate it; it's another thing to have programs use that to improve performance."

Looming over the process of overhauling performance evaluation is the fear that OMB might do too good a job of establishing metrics. Employees at the Defense Contract Audit Agency, for example, say a draconian commitment to metrics caused the agency's numerous and well-publicized problems. In the wake of a scathing July 2008 report from the Government Accountability Office showing that DCAA had neglected its oversight duties and developed an inappropriate relationship with industry, one 25-year veteran of the agency said, "In the end, defense contractors big and small are getting away with murder because they know we at DCAA are slaves to the metrics."

Zients acknowledged this concern before his confirmation, saying he would work with agencies and other stakeholders to develop a performance framework that not only lays out clear standards but also will help them advance. "My goal in taking on this role is to improve the performance of the federal government, not to increase the burden on federal program managers and distract them from effectively managing their programs," he says. "Creating a performance evaluation process that is too rigid or standardized might become a compliance exercise."
