The Results Game

Management tools track performance, but do they help improve it?

"Managing for results" is one of those ubiquitous phrases in federal government, bounced around with other catchphrases like "sharing best practices" and "breaking down silos."

Many administrations have taken a crack at defining the results agencies should reach for and how their progress should be measured. Through the years, agencies have scrambled to hit targets laid out during the Clinton era under the National Partnership for Reinventing Government and the 1993 Government Performance and Results Act, George W. Bush's Program Assessment Rating Tool and, most recently, President Obama's high-priority performance goals. Each of these initiatives was branded as a management tool for agencies to improve performance and ensure taxpayers receive their money's worth from federal programs.

But what exactly does it mean to manage for results? Can any one management strategy really help agency and program leaders achieve goals as diverse as upgrading a technology system and boosting staff morale?

In June, Shelley Metzenbaum, the Office of Management and Budget's associate director for performance and personnel management, issued a memo on performance improvement that, among other things, offered managers some concrete suggestions. The Obama administration expects senior leaders to conduct "goal-focused, data-driven reviews" at their agencies at least once every quarter to track progress against their high-priority performance goals.

Metzenbaum laid out guiding principles for these meetings, noting they should be driven by performance data and related information such as cost, skills assessments, employee feedback, and the capability of program partners inside and outside government.

"They should focus on progress toward desired outcomes, explore the reasons why variations between performance targets and actual outcomes occurred, and prompt quick adjustments to agency strategies and action when needed," she wrote.

The memo says managers are accountable for a laundry list of performance-related tasks: setting outcome-focused goals and measuring progress, tracking completion of milestones, comparing progress among peers to identify better practices, looking for factors that government can influence and that affect trends, implementing strategies based on performance and other relevant data, making quick adjustments to strategies when they are not working, and reporting progress to the public in useful and accessible ways.

Just reading the list seems exhausting. Underlying Metzenbaum's guidance is a belief that knowledge is power. The premise is that managers who have the data to know where they are succeeding and where they are lagging can apply effective management techniques and fix problem areas. But is it that simple?

Administration leaders have little choice but to frame guidance in a way that makes it applicable to hundreds, if not thousands, of managers tackling varied challenges. But perhaps it is time to admit there is no one formula for managing for results. There is no checklist of actions a manager can take to ensure his or her program succeeds.

Promoting one executive's success might indeed help others facing similar challenges, but pretending a variety of missions can be addressed with a single solution is a disservice to managers across the board.

This column will, at least on occasion, highlight managers who have achieved specific results. There are lessons to be learned from their successes; sharing best practices can be more than a platitude. But when it comes to managing for results, agency leaders need access to every available tool as well as the reins to choose, a la carte, those that best suit their needs and their programs.

Elizabeth Newell covered management, human resources and contracting at Government Executive for three years.