How We Grade
To further bolster our grading consistency, we attempt to collect and evaluate the same kind of information about each agency. Each is asked to complete a written survey covering all the criteria. The surveys are an important source of data, as well as a way for each agency to tell its own story. Of this year's seven agencies, all but one, the Bureau of Consular Affairs, completed the survey. This year, for the first time, we also separately surveyed 100 managers at each agency who have hiring, firing and supervisory authority and significant responsibility for core functions. On average, 60 percent of them returned completed surveys, each of which included 77 questions.
In addition to the surveys, we analyze agency documents, such as strategic and performance plans and reports, studies performed by outside reviewers such as the General Accounting Office and the National Academy of Public Administration, and the agency's materials supporting budget requests, investment decisions, staffing levels and the like.
Further, Government Executive writers conduct a full-bore reporting effort for each agency. In addition to reviewing mountains of documents, they conduct 60 interviews, on average, for each agency story, gathering the views of GAO auditors; budget reviewers; congressional oversight and appropriations staff; agency clients, customers and others affected by agency programs; think tanks; unions and professional organizations representing the bulk of agency employees; management consultants and vendors playing significant roles in the agency; executives and managers running the agency's core programs; line managers and employees; and agency officials with the most significant responsibility in the project issue areas.
The George Washington University project staff sort and study the information collected from all these sources using the framework of the criteria. In consultation with Government Executive's writers and project editor, the GW team assigns preliminary grades in each issue area as well as an overall grade for each agency. Fifty percent of each agency's overall grade is based on its ability to manage for results; the other areas of management are rated on their contribution to results-based management. We chose this emphasis to reflect the growing importance of performance-based management in government and to focus our evaluation more tightly.
We also applied to the grading the experience and insights we have gained rating the management of 20 other agencies for our two previous reports. In addition, for the first time this year, we asked a team of expert advisers to react to and assess the preliminary grades. Taking their suggestions into account, we reviewed our ratings and issued the final grades reported here.
It's important to note that in our grading scheme, an A doesn't indicate perfection and a D doesn't mean virtual failure. The grades are better viewed as an indication of the company an agency keeps, that is, whether its management capabilities place it in the top, middle or lower tier of agencies we've graded thus far. Because we are so acutely aware of the limitations of grading, the project produces more than grades. The feature stories that accompany each agency's grades carry equal weight in our assessment. These stories are not intended to explain the grades, but instead to place each agency's management performance within the context of the political, economic, social, demographic and historical challenges it faces.
The stories augment the grades by examining the results agencies currently are achieving, the politics they must negotiate, and the many countervailing forces they must overcome in order to accomplish their missions. The stories offer fuller examinations of each agency's management strengths and weaknesses; the key hurdles to good management it must overcome and how it is grappling with them; how overseers and clients view its performance and its management; what the agency is doing to deal with its weaknesses and how well it is doing; what problems loom; how well prepared the agency is to achieve results in the future; and how it is altering or improving management to produce better results.
We believe this combination of grades and stories offers the fairest and most comprehensive analysis of federal agency management ever undertaken. But in the end, this report can be nothing more than a good-faith effort to capture a moment in the long, ever-changing stories of the agencies we're rating. By the time you read it, conditions and the agencies themselves already will have changed.
2001 Expert Advisory Panel
Kenneth Apfel
G. Edward DeSeve
Renato A. DiPentima
Mortimer L. Downey
Steven Kelman
Susan Robinson King
C. Morgan Kinghorn