Pioneering Performance

After years of failed attempts by agencies to link resources to results, the buck has stopped at the Office of Management and Budget. OMB Director Mitch Daniels believes every budget examiner has a role in weaving performance data into the president's fiscal 2004 budget. Daniels is trying to jump-start performance budgeting by forming an advisory council of outside experts. In addition, he has asked OMB examiners to score at least 20 percent of programs at each agency using OMB's new Program Assessment Rating Tool.

Carl D. DeMaio is a senior fellow at Reason Public Policy Institute and director of the Performance Institute. Contact him at carld@rppi.org.

Performance budgeting, one of the five key goals in President Bush's management agenda, is nothing new. Agencies have been trying to do it for years, but most of their budget submissions still fail to clearly link funding requests to program performance.

During the fiscal 2003 process, the administration used a traffic light scoring system to grade agencies on five broad performance budgeting criteria. Those criteria examined broad issues such as integrating planning and budgeting functions, devising a results-oriented strategic plan, realigning budget accounts with performance goals, charging full budgetary costs to performance goals, and demonstrating use of performance in making decisions. OMB gave all but a handful of agencies red lights for their failure to meet the goals.

The traffic lights helped focus attention on the president's goals, but they were too broad to affect budget decisions, which are made on a line-item basis. Moreover, some observers said the grading system was too subjective.

The daunting task of evaluating program-specific submissions was a problem, too. OMB assembled a small task force of examiners under the supervision of Associate OMB Director Marcus Peacock to administer the ratings. Working tirelessly, the task force pored over reports from inspectors general and the General Accounting Office, as well as the performance data provided by the agencies. Even so, the handful of budget examiners assigned to the effort could evaluate only a fraction of federal programs.

The program assessment tool for fiscal 2004 will help remedy some of these problems by providing more objective criteria for program-specific evaluations. The criteria include 20 weighted questions across four areas: program relevance, strategic planning, program management and program results.

To allow for cross-program comparisons of effectiveness, OMB has launched a "common measures" project for programs that address health care, wildland fire management, job training and employment, housing assistance, flood mitigation, environmental protection and disaster insurance.

Recognizing that performance budgeting requires resources, OMB has assigned more staff to the effort and will draw on six performance budgeting experts from outside government for independent advice.

OMB's use of the Program Assessment Rating Tool, advice from the advisory council, and common measures won't be perfect, just as public sector budgeting will never be perfect. Nevertheless, using a standardized set of criteria, tapping outside expertise, and comparing the relative effectiveness of similar programs are steps in the right direction.

To make these tools work, OMB will have to overcome a number of obstacles. Among its own management challenges, some examiners still do not understand the difference between traditional (input-based) budget reviews and performance-based budget reviews. As one former OMB examiner insists, "We've always looked at performance when we review agency budgets. There's nothing new about this."

On the contrary, much of true performance budgeting is new. It involves a more transparent budget that incorporates real-time data on costs and outcomes and greater accountability for tangible program results.

As they begin to apply the new criteria, OMB examiners will find that evaluating performance is easier said than done. Like the back-and-forth between the examiner and the agency over budget numbers, the reviews require a dialogue, primarily about performance measures and the quality of data. Unfortunately, many programs lack results-oriented performance measures. Indeed, some programs do not have clear performance goals at all.

Another hurdle is good old-fashioned politics. Even if OMB can overcome the technical challenges and generate solid cases that certain programs are not working, Congress and the administration can disregard performance information. Lawmakers and policy-makers can continue to fund politically popular programs, even if they are poor performers.

Despite the technical and cultural obstacles to performance budgeting, tools such as the Program Assessment Rating Tool and the kinds of questions it raises will provide better information for making decisions, both at OMB and on Capitol Hill. Performance budgeting won't eliminate the political forces at work in the budget process, but in time it will hold agencies accountable for the taxpayer's dollar.


