
So Long to the Cubit and Ad Hoc Performance Metrics

Cross-agency benchmarks support data-driven decision-making.

A “cubit” is an ancient measure of length – from the elbow to the tip of the middle finger. We no longer use it because everyone’s forearm is a different length, so the same measurement yields different results. The federal government has a project underway to move from its own version of cubits to a more standardized set of performance measures.

The cross-agency initiative Benchmark and Improve Mission-Support Operations has been underway since early 2013. Results are being used to inform discussions between the Office of Management and Budget and agencies in their first-ever “FedStat” meetings on how well they are managing their administrative functions and delivering on strategic objectives. The project manager, Steve Brockelman of the General Services Administration, says: “We now have a rich set of governmentwide, cross-functional benchmarks to support data-driven decision-making.”

For at least two decades, there have been ad hoc efforts to benchmark federal agency performance in areas as diverse as call center efficiency, customer satisfaction and employee satisfaction. But more recently, there has been interest in benchmarking the cost and quality of services across key mission-support activities, such as human resources, real estate management, contracting and information technology. Supported by their respective cross-agency councils, such as the Chief Human Capital Officers Council, the efforts have been spurred in part by a broader interest in shared services across agencies.

OMB’s then-deputy director for management, Beth Cobert, and then-GSA administrator Dan Tangherlini were the initial champions of the benchmarking initiative. They have been succeeded by two other strong champions, Denise Turner Roth at GSA and Dave Mader at OMB, providing continuity. In addition, there has been strong support from the President’s Management Council, composed of the chief operating officers of the Cabinet departments and major agencies.

The project is actually “owned” by the cross-agency councils. Brockelman became the point person because he runs GSA’s Office of Executive Councils, which provides staff support for most of the cross-agency mission-support councils. He conducted similar efforts when he was in the private sector.

Brockelman notes that the key to success in any benchmarking initiative is to create consistent, standardized, agreed-upon data elements, with clear definitions and a common frame of reference (e.g., time, place and process).

The Process

The five cross-agency councils created working groups that “took the lead in developing and selecting metrics that would help them improve cost-effectiveness and service levels within their functions,” Brockelman says. These include the Chief Financial Officers Council, Chief Information Officers Council, Chief Acquisition Officers Council, Chief Human Capital Officers Council and Federal Real Property Council.

They created a “common language for measuring performance of agency mission-support functions,” Brockelman adds. The Office of Executive Councils served as a neutral convener to promote collaboration and problem-solving among agencies and OMB.

The councils agreed on three guiding principles for their work:

  1. Imperfect data is better than no data. Data can be enhanced over time, especially if it is seen as useful by agency decision-makers.
  2. Create action-oriented metrics that can answer questions such as: “How efficiently is my function providing services compared to my agency peers?”
  3. The resulting data needs to be seen as a resource to be shared across agencies so the President’s Management Council and agency management teams can better understand the cost and quality of their administrative functions. The cross-agency councils will serve as clearinghouses for identifying and sharing best practices, and individual agencies can use the results to diagnose issues and prioritize areas ripe for improvement.

The initiative completed its first round of data collection in 2014, gathering about 40 metrics, largely around cost and efficiency, across the five mission-support functions. Round two, completed in the first half of 2015, added about 26 metrics covering operational quality and customer satisfaction. To get customer satisfaction data, the councils jointly sponsored a survey of 139,000 managers.

The Results

Initial results, Brockelman says, show “the amount of variation in the cost and quality of commodity services across the government is enormous.” When data from one department is compared across bureaus or components, it sparks a discussion among the leadership team: Are there legitimate reasons why performance varies so much from the norm?

This new benchmarking data allows the chiefs of various mission-support functions to explore answers to fundamental management questions:

  • What is the area with the greatest need for improvement?
  • What are the trade-offs in shifting resources from one area to another?
  • Which shared-service providers would deliver greater savings and quality?
  • Which services are internal customers dissatisfied with, and why?

The FedStat Meetings

Collecting and reporting data is one thing. Using it is another. This year, for the first time, OMB created a forum to assess mission-support issues across functions. The high-level “FedStat” reviews hosted by each agency are co-chaired by the deputy secretary and OMB’s deputy director for management. Pre-meetings are held jointly between OMB and agency staffs; the goal is to avoid any surprises going into the reviews.

The meetings held this spring and summer focused on real challenges, and the broader context allowed a more nuanced discussion. The specific actions discussed will be incorporated into the president’s fiscal 2017 budget.

Data from each agency is shared across (but not outside of) government to encourage honest dialogue and develop better quality data. Brockelman, who sits in on these FedStat meetings, says about 40 percent of the agenda is centered on the benchmarking data produced by the initiative. Questions raised include:

  • What are the three biggest challenges revealed by the data?
  • What are their root causes?
  • Where do management teams want help from OMB, GSA, the Office of Personnel Management, or other agencies?
  • Does it make sense to move some of these activities to a shared-service environment?

Next Steps

Brockelman says the benchmarking initiative is relaunching its website in the coming weeks to allow agencies to view, compare and analyze the new data. Cross-agency council meetings this fall will focus on the governmentwide implications of the benchmarking results, the key performance drivers, shared challenges and leading practices. OMB has already begun a series of conversations with various stakeholders to uncover lessons learned in order to improve the 2016 process. In this endeavor, fine-tuning is a perpetual process — but it is better than using cubits.

(Image via Fleckstone/Shutterstock.com)