Promising Practices
A forum for government's best ideas and most innovative leaders.

After Two Decades, Agencies Finally Are Starting to Make Performance-Based Decisions


It’s taken over 20 years, but finally the pieces seem to be falling into place for the use of performance information to inform decision making.

Back in 1993, reformers thought that if agencies developed strategic plans, operating plans, and measures of progress, decision makers would use the resulting information to manage better. That didn’t work. In 2001, the Bush Administration thought that if it created a scorecard of more discrete performance information at the program level, decision makers would use it to manage better. That didn’t work either. A recent article in Public Administration Review by professors Donald Moynihan and Alexander Kroll concluded: “Performance reform initiatives in the U.S. federal government in the last 20 years fit a general pattern of dashed expectations.” Indeed, “the most damning fact is that the reforms failed to meet their own basic goal of making the use of performance data the norm.”

A 2010 reform to the reform law, the Government Performance and Results Modernization Act, set out to fix various flaws. Did it work? Early evidence suggests it has, according to Moynihan and Kroll, a conclusion recently reinforced by a new study released by the Government Accountability Office. But there are some caveats:

The right routines matter. In their article, Moynihan and Kroll reanalyzed GAO survey data from 2014 regarding federal managers’ use of performance information and found reason for optimism: “The Modernization Act put in place a series of routines that established organizational conditions for greater use of performance data.”

They go on to say: “Performance management reforms typically create routines of data creation and dissemination; prior analyses suggest that such routines do little to increase the desired behavior of performance information use.” However, under the Modernization Act, federal managers “report higher performance information use” as a result of the newly mandated routines of:

  • Goal coordination, especially cross-agency priority goals;
  • Goal clarification, such as agency priority goals; and
  • Data-driven reviews of quarterly progress towards these goals.

The authors note: “A basic problem for public organizations is that they inherently pursue multiple and possibly conflicting goals.” The newly mandated routine of goal clarification requires agencies to set no more than five priority goals. The authors found that when “organizational goals are clarified, employees are more motivated by and attentive to goals.” They also found that this routine fosters greater leadership commitment.

The 2010 law also requires quarterly data-driven progress reviews of priority goals. The authors find that “Well-run data-driven reviews will generate higher performance information use than poorly run reviews.”

In addition to examining routines, Moynihan and Kroll analyze GAO survey data of federal career managers in 2007 and 2012-13. They use pooled regression analyses to examine four types of data use at the level of individual respondents (GAO focused its analysis at the agency level):

  • Performance measurement,
  • Program management,
  • Problem solving, and
  • Employee management.

Their analysis found that “all three routines are significantly related to purposeful data use.” More importantly, they did not find similar results when looking at the pre-Modernization Act survey data.

Cross-agency goals showed progress. The 2010 law also required the Office of Management and Budget to designate a small handful of priority goals that span multiple agencies. This was the first of the three routines that Moynihan and Kroll assessed. A recent GAO report assesses this new routine and commends OMB for its efforts during the first two years of implementation.

GAO reviewed 7 of the 15 Cross-Agency Priority (CAP) goals, noting they are “4-year outcome-oriented goals covering a number of crosscutting mission areas—as well as goals to improve management across the federal government . . . intended to drive progress in important and complex areas, such as improving information technology and customer service interactions.”

It found that OMB and the cross-agency Performance Improvement Council “incorporated lessons learned from the 2012-2014 interim CAP goal period to improve the governance and implementation of these cross-cutting goals.” For example, OMB changed the governance structure of each CAP goal to include agency leaders, not just White House officials; held regular senior-level reviews; and provided ongoing assistance to CAP goal teams.

GAO says that OMB and the Council implemented a set of strategies to build agency capacity to work more effectively across agency lines. For example, they:

  • Assigned agency-level co-leaders, instead of putting all the leads in the White House.
  • Provided guidance and assistance to agency teams, such as techniques for data collection, seminars on how to develop metrics, and assistance in how to develop milestones and actionable next steps.
  • Held senior-level reviews.
  • Obtained funding to support activities. For example, the Lab-to-Market team will use $1.9 million to develop an interface for the 17 Department of Energy national labs to interact with external stakeholders.
  • Launched the White House Leadership Development Program, whose participants provide staff support to CAP goal leaders.

GAO concludes that “efforts to build the capacity of the CAP goal teams to implement the goals has resulted in increased leadership attention and improved interagency collaboration for these goals.” Team members reported that being designated as a CAP Goal led to increased leadership attention and collaboration. For example, the Smarter IT Delivery team told GAO that “obtaining the hiring authority [for digital service experts in agencies] was a direct result of the CAP goal.”

GAO also found that the goal teams were better able to leverage expertise across agencies. For example, the Open Data CAP goal created an interagency working group of a diverse range of employees that meets biweekly; as a result of the group’s efforts, “every agency now knows how to produce machine-readable documents.”

While assessing CAP goal implementation, GAO found that the most problematic element was the development of performance metrics to determine progress. CAP goal teams “are using milestones to track and report on progress quarterly. However, they are still working to improve the collection and reporting of performance information for the CAP goals.” While “The use of milestones is a recognized approach for tracking interim progress toward a goal,” it is not sufficient on its own.

What happens next matters. The real test will be with the onset of a new Administration. Will it build on the efforts to date, or will it want to go back to the drawing board? Moynihan and Kroll conclude: “A new president may be tempted to look for another approach or to simply deprioritize the Modernization Act. This would be a mistake . . . We find that the quality of routines—not just the routines themselves—matter.”

John M. Kamensky is a Senior Research Fellow for the IBM Center for the Business of Government. He previously served as deputy director of Vice President Gore's National Partnership for Reinventing Government, a special assistant at the Office of Management and Budget, and as an assistant director at the Government Accountability Office. He is a fellow of the National Academy of Public Administration and received a Masters in Public Affairs from the Lyndon B. Johnson School of Public Affairs at the University of Texas at Austin.
