The Government Accountability Office reconfirmed a pervasive management problem in its new report in the Managing for Results series. The title defines the problem: “Governmentwide Actions Needed to Improve Agencies' Use of Performance Information in Decision Making.” This is not a new theme for GAO, but apparently its past recommendations have had minimal impact. Despite what seems to be continuous attention, the use of metrics has declined, not improved, over the past decade.
As recently as 2016, a column on the Federal News Radio website focused on efforts by the Performance Improvement Council “to bring together tools and people to take better advantage of technology and data” and to “start to think about putting real-time performance data in the hands of federal managers” in more customizable and user-friendly ways. Creating “opportunities to use performance data, especially for program managers” was described as “really the sweet spot.”
Those quotes are damning. Metrics have been required since 1993, when the Government Performance and Results Act required agencies to develop performance plans with measurable goals. In 1997, John Koskinen, then the Deputy Director for Management in the Office of Management and Budget, testified that experience under GPRA “confirmed that virtually every activity done by government can be measured in some manner.” But agencies are still working “to bring together tools and people” and only now starting “to think about putting real-time performance data in the hands of federal managers.” Worst of all, agencies are still working to create “opportunities to use performance data.” The column’s headline referred to the use of metrics as “a compliance exercise.”
For anyone who has worked in business, that is eye-opening. In successful companies, metrics have for decades been the basis for decision making. In manufacturing and sales, new data are reported daily. Financial data have been central to business management for centuries. In the 1980s, quality metrics opened the door to the empowerment of frontline workers.
The quality movement exemplified the use of metrics at its best. As the workday unfolded, new data were posted and decisions were made by frontline workers to improve results. For a time, quality was a religion. Frontline workers were responsible for results. That prompted articles like “Listen to Your Frontline Employees.” Later there were articles like “How to Engage the Front Line in Process Improvement.” In healthcare there were articles like “Redesigning the Care Team: The Critical Role of Frontline Workers and Models for Success.”
Government is like a football game. Games are won or lost on the field by frontline workers. But regardless of how well they play, there are spectators who feel free to criticize. There are also observers high up in a booth, away from the field, who compile metrics. Coaches spend time game planning, but once a game begins, the best make adjustments and provide feedback to help players improve.
It’s Not the Numbers
Metrics document the past and are central to improving performance. That involves far more than compliance—change is essential for improved results. The numbers have little value unless they influence workday decisions. Too often performance data are generated, reported to agency leaders, included in budget proposals or posted on scoreboards, but then not used in managing performance. This has been a documented problem at all levels of government.
Another football analogy is the importance of the run-pass option play in the success of my Philadelphia Eagles. The quarterback has discretion in running the play. That works when there is mutual trust. Managers need similar discretion and the trust to make day-to-day operational decisions.
Two core issues stand out in comparing the GAO methodology with the survey used in the Census Bureau’s “Management in America” study. Both ask how metrics are used and rely on responses by managers.
The most important difference is the purpose of the Management in America study—to understand the impact of management practices, including the use of metrics, on an organization’s performance. The phrase “key performance indicators” is used in several of the Census survey questions but does not appear in the GAO report. The research highlighted the practices that are linked to better results. The same practices have proven to be important in global studies.
A second difference in the comparison is reflected in the report’s references to who uses the data. The GAO report repeatedly refers to senior leaders or agency leaders (as in “the use of performance information by senior leaders”). That’s a common thread in columns or reports on metrics and evidence-based decision making in government.
In contrast, the Census study includes several questions asking about the involvement of non-managers—the playmakers: their role in using data, rewarding their performance with bonuses, and the impact of performance on their careers (i.e., promotions, reassignment and dismissal).
The Census questions also look at how frequently key indicators are reviewed by managers and non-managers. The survey options include monthly, weekly, daily and hourly in addition to annual and quarterly. At lower levels weekly meetings are common.
Agencies should look to the city of Baltimore as a model. The mayor created an office, CitiStat, in 1999 to track and manage performance. Support for the office has waned at times, but today “there are regular performance management sessions between the Mayor’s Office, the CitiStatSMART team and department leaders using data analysis to discuss performance, identify problems, diagnose causes and direct resources to solve problems.”
Key people from the Bureau of the Budget and Management Research, Human Resources and Information Technology attend the meetings. Apparently, Baltimore has knocked down the silos.
Corporate leaders hold similar meetings, typically monthly, to discuss developments and modify plans to improve performance. Business executives know they need to have answers to resolve problems. They have significant rewards riding on achieving or exceeding planned results.
Those meetings are led by chief executive officers. The intensity of the meetings has been discussed in business media reports. The individuals who reach the executive ranks know what to expect. Investors demand improved results. Similar discussions occur at every level.
The Barrier: Fixed Mindsets
When people work in a static environment for an extended period, they develop behavioral habits and a disposition to dismiss new ideas or suggestions as unnecessary or unwarranted. When the work group is stable, those attitudes and assumptions are shared by co-workers. The common beliefs and attitudes are effectively the culture.
Large organizations do not have a uniform culture, but if there is one trait in common across government it’s at least passive resistance to change. For successful change, thousands of employees will need to accept the argument for change. For that they will need to understand how planned changes will better serve their agency’s mission, along with how the changes will affect their careers. New skills may be needed as well. Revised goals and behavioral changes should be reinforced with financial and nonfinancial rewards. The same would be true in the private sector.
Important also, according to Harvard’s Robert Behn, is to plan on incremental, “doable” goals. Then those responsible can celebrate small wins that build commitment for raising the bar.
The track record shows clearly that increased investments in technology or in developing new metrics are not the answer. That’s been tried for two decades. It’s also clear that the solution is not additional legislation. The essential changes come under the reform umbrella, but the real problem is the culture, and that is not directly addressed by the changes proposed in the president’s management agenda.