Washington breathed a collective sigh of relief when a government shutdown was averted on Sept. 30. But that news overshadowed the quiet release of a Government Accountability Office report on the government’s progress on using performance information to make better decisions.
GAO is mandated by law to track agencies’ implementation of the 2010 Government Performance and Results Modernization Act. Its summary highlights mixed results, but the report’s details show a great deal of progress. The report covers a range of issues, summing up a series of related reports over the course of the past year. But the core issue is: Are agencies using performance data to make decisions?
The track record hasn’t been promising. GAO notes that surveys of federal managers it conducted between 2003 and 2013 found that a majority of them were not using performance information and that “agencies continue to struggle to do so.” But that was before corrective provisions in the new law kicked in. While there is no new survey data, the actions by agencies are hopeful.
What are federal agencies doing? The 2010 update to the Government Performance and Results Act mandates a series of administrative processes, ostensibly to improve agency performance. More specifically, the 2010 law requires two sets of progress reviews by senior agency officials: one is quarterly reviews of targeted agency-designated priority goals (about 100 across the 23 largest agencies), and the other is broad annual reviews of each agency’s progress toward the objectives outlined in their strategic plans (about 350 across these same agencies).
Agencies have been conducting quarterly reviews for several years, but 2014 marked the first round of progress reviews under the strategic plans developed for President Obama’s second term. GAO examined agency efforts for both the quarterly reviews and the annual strategic reviews. A key question: Were the reviews of value?
Agency quarterly data-driven reviews. GAO’s assessment of quarterly reviews of priority goals surveyed leaders in the 23 largest agencies and found that “most agencies reported their reviews have had positive effects on progress toward agency goals, collaboration between agency officials, the ability to hold officials accountable for progress, and efforts to improve the efficiency of operations.”
Most agencies followed OMB’s implementation guidance and best practices. Nearly all agencies “reported that their data-driven reviews have had a positive effect on collaboration between officials from different offices or programs.”
For example, Carolyn Colvin, acting commissioner of the Social Security Administration, “personally presided over bimonthly review meetings and created a new office (the Office of the Chief Strategic Officer) to support expanded performance management and data analysis efforts,” the report said.
This expanded analytic capability, along with the new review process, had a positive result. One of SSA’s priority goals is to increase registrations for the my Social Security portal by 15 percent per year in 2014 and 2015. Early quarterly reviews, however, made it apparent to SSA leaders that the “agency was not on track to achieve its target for this goal.”
The analysts diagnosed the reasons and found that the approach under development had so many additional technology features that implementation was bogged down. When presented with this analysis at one of its quarterly review meetings, the SSA leadership team agreed to shift focus to what could be done with existing resources and technology. “To achieve this, SSA leadership had different offices within the agency . . . specify the contributions they would make to help increase the number of registrations,” GAO said. While SSA was unable to meet its 2014 goal, by shifting its strategy it had increased new registrations by 46 percent by October 2014 compared with the prior year, with similar increases in subsequent quarterly reviews.
Agency annual strategic reviews. In a separate report, GAO examined practices six agencies used in 2014 to conduct their first strategic reviews. It identified seven promising practices that make for effective strategic reviews. These practices include: establishing a process for conducting strategic reviews, using the reviews to assess the strategies and other factors that influence the outcomes, and monitoring progress on needed actions.
One of the strategic objectives at the Agriculture Department, for example, is to increase food security and reduce hunger among children by providing low-income people access to healthy food and nutrition education. It created a logic model to explain how its multiple food programs interact, and it uses the linkages among components in that logic model to ensure “decision-makers can have more focused and meaningful discussion for how proposed strategies are tied to desired results and how to measure the success of strategy execution and impact.”
FedStat reviews. In parallel with the GAO assessment of the effectiveness of the 2014 strategic review process, OMB and agencies conducted a self-assessment of how well the 2014 annual strategic reviews went and tweaked them for 2015 by combining the strategic reviews with other OMB-required reviews of mission support functions.
This new agency-based forum – dubbed “FedStat” – replaced the lower-level strategic reviews conducted in 2014 and was expanded to assess mission-support issues across functions and in the context of mission-oriented strategic objectives. These meetings are hosted by each agency and co-chaired by OMB’s deputy director for management and the deputy secretary (or chief operating officer) for that agency. Pre-meetings are held jointly between OMB and agency staffs; the goal is to avoid any surprises going into the meetings. The meetings focus on real challenges, and the broader context allows a more nuanced discussion.
According to participants, these meetings were conducted over the course of late spring and the summer. OMB staff are summing up specific actions that, where appropriate, will be incorporated into the president’s fiscal 2017 budget, to be released in early 2016.
The revised review forum allowed conversations within most agencies that had not been held before. Previously, no one was raising the questions, analyzing the data, and presenting it in a forum where top leaders could discuss these topics in a way that assessed the interaction between mission-delivery and mission-support elements within an agency.
What’s next? It will take constant tinkering to make these processes work in such a way that participants see value in them rather than treating them as just a compliance exercise. But having leaders who care is equally important. According to GAO, almost half the agencies reported that “sustaining a data-driven review process over time and across leadership transitions can be a challenge.”
I still hear complaints that these reviews are a compliance exercise. The same was said of the Clinton and George W. Bush performance initiatives. Will the next president take a different tack? Donald Moynihan and Alexander Kroll, in a recent journal article, warn: “A new president may be tempted to look for another approach or to simply deprioritize the Modernization Act. This would be a mistake.”