Analysts urge fine-tuning of performance measurement systems

Effort to link results to budget decisions questioned.

Performance measurement systems should incorporate the input of front-line federal employees and use innovative metrics to accurately gauge successes and failures, analysts said at a forum in Washington last week.

At the event, sponsored by the Georgetown Public Policy Institute and consulting firm Accenture, Paul Light, a professor of public service at New York University, noted that the Office of Management and Budget continues to press the issue of requiring agencies to report on the performance of federal programs through its traffic-light-style grading system. But, he asked, "Does Congress care? Does the public care? Does it matter if you're getting to green, or going to hell?"

Gary Parston, director of the Accenture Institute for Public Service Value, added another question to that list.

"How do we move beyond measurement to management?" he asked.

Gary Bass, an affiliated professor at the Georgetown institute and executive director of the nonprofit monitoring group OMB Watch, said performance measurement programs should be aimed at helping agencies do their jobs better, not at determining which programs should be cut and which should get more funding. A focus on budgeting rather than results can harm performance measurement efforts, he argued.

"I think it's incredibly frustrating to hear Congress speak out of both ends of their mouth," Bass said. "We have a Congress that says we want all this measurement, we want everything documented, but then cutting spending on research. That is totally frustrating. You end up having to go with the best measurements available."

Even if there is appropriate funding for measurement programs and a focus on improving performance, the panelists said, government programs frequently can't be evaluated in simple terms.

Beryl Radin, a scholar in residence at the American University School of Public Affairs, pointed to block grants and scientific research as two kinds of government work that defy conventional assessment.

"Block grant programs ... are not programs that are delivered by federal officials. They are officially designed to give discretion to state, local and third-party players," she said. "Research and development programs, whether it's the science programs in the [Environmental Protection Agency] or [National Institutes of Health] or [National Science Foundation], it makes no sense to evaluate them, number one, on an annual basis and, number two, when we know that the most effective research results from what we could call failures."

Radin said tailoring performance measurement and management to each agency and program was important not only for capturing the correct information but for ensuring that employees buy into organizational goals.

"If we could figure out a way to think about each agency trying to grapple with issues involving outcomes and performance measures in terms that make sense to their culture and their reality, that's what we want to get," Radin said. "When it's coming from the White House -- and it doesn't matter who is sitting in that White House -- I think you're going to get perverse responses that don't take us further down that road."

Another challenge, Accenture's Parston acknowledged, is that goals change as presidents come and go.

"If a political administration decides that a particular outcome isn't important, then that's a democratically mandated directive that the agency has to adhere to. If we're not there to prevent obesity any longer, we're not there to prevent obesity," Parston said.

All the participants agreed that performance measurement data could not drive better management unless agencies actually used it. Marcus Peacock, deputy administrator of EPA, said one of the agency's regions improved its water quality efforts by analyzing groups of watersheds and using them to predict results, rather than proceeding watershed by watershed, allowing it to clean more rivers and streams more quickly.

Peacock cautioned that performance measurement was still in its nascent stages, and compared the evolution of systems to the development of the eye from simple light sensitivity to a highly adapted tool.

"We're still swimming around in the ocean. I'm still trying to figure out if there's light or not," he said.