How We Grade
To further bolster our grading consistency, we attempt to collect and evaluate the same kinds of information about each agency. This year, we stopped conducting written surveys of each agency under examination, opting instead to interview agency officials with responsibilities in our five areas of study, using specially crafted sets of questions tied to the project criteria. The interviews are an important source of data, as well as a way for leaders to tell their agency's story. We also separately surveyed managers at each agency who have hiring, firing and supervisory authority and significant responsibility for core functions. The Immigration and Naturalization Service was the only one of this year's six agencies to refuse to provide contact information for its managers so we could survey them.
In addition to the interviews and surveys, we analyze agency documents, such as strategic and performance plans and reports; studies performed by outside reviewers, such as the General Accounting Office and the National Academy of Public Administration; and the agency's materials supporting budget requests, investment decisions, staffing levels and the like.
Further, Government Executive writers conduct a full-bore reporting effort for each agency. In addition to reviewing documents, they conduct a wide range of interviews for each agency story, gathering the views of GAO auditors; budget reviewers; congressional oversight and appropriations staff; agency clients, advocacy groups, customers and others affected by agency programs; think tanks; unions and professional organizations representing the bulk of agency employees; management consultants and vendors playing significant roles in the agency; executives and managers running the agency's core programs; line managers and employees; and the agency officials with the most significant responsibility in our areas of study.
The George Washington University project staff sorts and studies the information collected from all these sources, using the criteria as a framework. In consultation with Government Executive's writers and project editor, the GW team assigns preliminary grades in each issue area as well as an overall grade for each agency. Beginning in 2001, 50 percent of each agency's overall grade has been based on its ability to manage for results. We rate the other areas of management based on their contribution to results-based management. We chose this emphasis to reflect the growing importance of performance-based management and to more tightly focus our evaluation. As we grade, we also apply the experience and insights we've gained rating management at 27 agencies for our previous three reports. In addition, we ask a team of expert advisers to react to and assess the preliminary grades. Taking their suggestions into account, we review our ratings and issue the final grades reported each year.
It's important to note that in our grading scheme, an A doesn't indicate perfection and a D doesn't necessarily mean near-failure. The grades are better viewed as an indication of the company an agency keeps; that is, of whether its management capabilities place it in the top, middle or lower tier of agencies we've graded thus far. Because we are so acutely aware of the limitations of grading, the project produces more than grades. The feature stories that accompany each agency's grades carry equal weight in our assessment. These stories are not intended to explain the grades, but instead to place each agency's management performance within the context of the political, economic, social, demographic and historical challenges it faces.
The stories augment the grades by examining the results agencies are achieving, the politics they must negotiate, and the many countervailing forces they must overcome in order to accomplish their missions. The stories offer fuller examinations of each agency's management strengths and weaknesses; the key hurdles to good management and how the agency is grappling with them; how overseers and clients view its performance and its management; what the agency is doing to deal with its weaknesses and how well it is doing; what problems loom; how well prepared the agency is to achieve results in the future; and how it is altering or improving management to produce better results.
We believe this combination of grades and stories offers the fairest and most comprehensive analysis of federal agency management ever undertaken.
The project's expert advisers:
G. Edward DeSeve
J. Christopher Mihm, director of governmentwide management issues for the Strategic Issues Group at the General Accounting Office.
Renato A. DiPentima, president, SRA Consulting and Systems Integration, SRA International; former deputy commissioner of the Social Security Administration.
Mortimer L. Downey, principal consultant with pbConsult Inc. of New York; former deputy secretary and chief operating officer, Transportation Department.
Hannah Sistare, executive director of the National Commission on the Public Service; former minority staff director and counsel for the Senate Governmental Affairs Committee.
A.W. "Pete" Smith, president and chief executive officer of the Private Sector Council.