The Ratings Whirl

Once, only your boss's opinion mattered at appraisal time. If the 360-degree feedback craze continues, you may find yourself ringed by raters, including your co-workers, subordinates and even your customers.

Most of us have fantasized about what we'd say if the tables were turned during our performance evaluation. Boy, would we give the boss an earful about his or her failings. That turnabout may soon become fair play if 360-degree feedback continues making inroads at government agencies. But what's good for the boss is good for the worker in this new evaluation system. If your office adopts a 360-degree program, both you and the boss can expect a thorough examination by your respective supervisors, colleagues, subordinates and maybe even your customers.

The groups from which raters are chosen vary, but all 360-degree programs involve soliciting feedback from more than just a single supervisor. Most have co-workers rate non-managers. Supervisors are rated both by those who report to them and by their peers. In most cases, employees pick their own raters but their choices are vetted by supervisors who try to expand the circle beyond an employee's pals. Raters' participation is voluntary in most programs.

While traditional evaluations measure what an employee does, 360-degree assessment measures how an employee does the job. Typically, raters are asked to assess employee performance in behavioral areas such as communication, teamwork and customer service. For managers, areas such as leadership and coaching are added. All programs encourage raters to write comments and most also have them rank performance in each area on a numeric scale, most commonly 1 to 10, with 1 as the lowest level.

Almost any change in performance ratings would be welcome at most agencies. Disdain for the five-level, supervisor-only system is nearly universal. Supervisors hate crafting appraisals and feel constrained when ratings and merit bonuses are tied together: with limited bonus money, managers often feel compelled to ration or rotate high ratings, giving rise to complaints of unfairness and subjectivity. "In the past, appraisals were used to justify the awards system," says Joseph Colantuoni, director of the Education Department's management systems and improvement group. "People didn't get the feedback they needed when 80 percent were rated 'outstanding.' "

Agencies moving toward teamwork have been particularly frustrated by the traditional system's focus on individual performance and individual rewards. Without some form of peer review, it's nearly impossible to evaluate members of self-directed teams, whose supervisors aren't involved in day-to-day activities. Adding more opinions to performance ratings also is appealing to organizations that have thinned supervisory ranks, leaving fewer bosses to observe and rate more staffers. For example, it was a major reorganization eliminating all first-level supervisors that spurred the Energy Department's Idaho Operations Office to begin using 360-degree feedback in 1992.

In addition, 360-degree feedback helps get employees at every level to adhere to common values and follow an agencywide plan. Most organizations base their 360-degree programs on values and goals they want reflected in every office. Both raters and those being rated thus share the same performance expectations, which can be altered as the organization changes to handle new challenges. Where improving service is an agency value (and that's supposed to be everywhere these days), adding internal and external customers to the raters' circle helps focus employees on pleasing them.

Employees also say 360-degree feedback improves communication between them and their supervisors. "The old system was a 'gotcha' system. This is a system for improvement. Employees said they didn't know what was expected and there was not enough feedback [under the old system]," says Marvin Farmer, president of the American Federation of Government Employees National Council of Education Locals 252. On the other hand, supervisors retain a good deal of discretion under Education's new 360 system, Farmer notes. "The first-line supervisor ultimately makes the call" in deciding an employee's final rating.

In 1995, when the Office of Personnel Management rewrote the performance rating rules to allow three- and even two-level systems, it also began to allow agencies to gather feedback beyond that of supervisors. This brought expansion of existing 360-degree feedback programs and creation of new ones in departments as varied as Energy and Education. OPM's new rule emphasized the importance of tailoring performance management procedures to organizational culture and technology. Thus, 360-degree programs across government differ in how they use feedback, whether raters are anonymous and how they gather ratings.

Should It Count?

Some, like the Education Department and Energy's Richland Operations Office in Hanford, Wash., expect supervisors to use 360-degree feedback as an information source in setting employees' annual ratings. Education and Energy's Western Area Power Administration (WAPA) have moved to pass/fail ratings, so 360-degree feedback plays a relatively minor role in ratings, coming into play only if an employee is on the verge of failing. Richland supervisors assign ratings at three levels, but that office, too, is considering moving to a pass/fail system.

"We made a decision the performance appraisal was to focus on employee development," says Steven Frieman, quality manager at WAPA. "We disconnected performance money from performance ratings. The only topic left to talk about was employee development, and 360-degree feedback is the focus."

At the Voice of America's engineering office, feedback is purely developmental. Recipients aren't required to share their feedback with bosses and it plays no role in ratings. "The supervisor does not see it, but we encourage people to share their areas of weakness with their staff so they can improve," says Jim Witt, director of total quality in the VOA office of engineering.

The U.S. Postal Service's 360 program began in 1992 with about 500 top executives. Though the reports were only supposed to be used for individual development, not for decisions about compensation and promotions, the executives' bosses received copies. USPS has now decided that giving bosses the reports was a mistake. "It was not developmental," says Cory Edwards, USPS corporate training and development specialist. "It was kind of like when you're in front of a jury and the judge says, 'Disregard that information.' "

Now postal executives' ratings go only to them. They are expected to attend a group meeting to discuss their 360-degree feedback in general terms, meet one-on-one with a consultant from the firm whose 360-degree program USPS is using, and use the feedback to build an individual development plan that supervisors will use to set their annual ratings. "This time around it's strictly and totally developmental," Edwards says.

Anonymous or Not

Most people would recoil at the thought of rating co-workers or the boss without remaining anonymous, and most 360-degree programs strive to conceal raters' identities. "Generally, it is advised that the identities of the raters be kept confidential to assure honest feedback," according to "360-Degree Assessment: An Overview," an April 1996 report in OPM's Performance Practitioner Series. "In close-knit teams that have matured to a point where open communication is part of the culture, the developmental potential of the feedback is enhanced when the evaluator is identified and can perform a coaching or continuing feedback role," the report continues. That's pretty much how they see it at WAPA, where raters must sign their comments.

"When it's anonymous, there's this whole feeling of 'What does this mean?' " says Frieman. "I could go to the person and speak to them about what they wrote. That's the advantage over other 360-degree systems. When you write comments and you know the person is going to know who wrote it, you're going to think twice about how you phrase things." At the Idaho Operations Office, ratings are anonymous, but many raters apparently aren't concerned about protecting their identities. "Actually, a good percentage of our employees add their initials to the comments in case the ratee has a question," says Russell Bennion, Idaho's 360 project manager.

Even agencies that protect raters' identities can't guarantee anonymity, especially when employees add their own comments. Most 360 programs provide employees written reports showing their numeric rating on each behavioral item and grouping all comments about that item below it. The comments are commingled so employees can't tell which group of raters they are from. But people still fear their writing style or spelling errors will give them away. "You've got to train people how to give feedback and what to look for. I tell them when they give remarks, they can leave fingerprints there," says Education's Colantuoni.
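
As a rough sketch of the mechanics (the data layout and function name below are invented for illustration, not drawn from any agency's actual software), a report generator of this kind might look like:

```python
import random
from collections import defaultdict
from statistics import mean

def build_report(forms):
    """Assemble one employee's 360 report from all raters' forms.

    Each form maps a behavioral item to a (score, comment) pair,
    using the common 1-10 scale with 1 as the lowest level.
    Rater identity is never stored with the form.
    """
    scores = defaultdict(list)
    comments = defaultdict(list)
    for form in forms:
        for item, (score, comment) in form.items():
            scores[item].append(score)
            if comment:
                comments[item].append(comment)

    report = {}
    for item in scores:
        # Commingle the comments so the reader can't tell which
        # rater group (peer, subordinate, customer) each came from.
        random.shuffle(comments[item])
        report[item] = {"average": round(mean(scores[item]), 1),
                        "comments": comments[item]}
    return report

forms = [
    {"communication": (7, "Listens well in meetings."), "teamwork": (8, "")},
    {"communication": (4, "Rarely shares status."), "teamwork": (9, "")},
]
print(build_report(forms)["communication"])  # average 5.5, comments shuffled
```

Of course, as Colantuoni's warning suggests, no amount of shuffling hides a distinctive writing style or a telltale spelling error.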

Anonymity takes on greater importance when 360-degree feedback is used to develop performance ratings. If raters' names appear on their questionnaires and the recipient disagrees with the performance rating, the recipient can file grievances against individual raters. At Energy's Albuquerque, N.M., Operations Office, some employees filed Privacy Act requests to discover the source of certain ratings. "Fortunately, the system was set up in such a way that the name was stripped off when the questionnaire went to the system administrator, so they didn't have the information," says Frank DiCostanzo, Energy's director of compensation and performance management.

Advocates of 360-degree feedback must walk a fine line between cautioning people about protecting their identities and dissuading them from writing comments, which often are the most useful part of the exercise. "The comments are the most valuable part. If I get a 1, it's not valuable without comment. I don't know what to focus on to improve," says VOA's Witt.

Neither Kicking nor Kissing

Precisely because they are personal, written comments can be both useful and damaging.

"There's trepidation when you're opening the envelope. My eyes were opened in a few areas. Some of the comments added areas I didn't have a clue about," Witt recalls of his first 360 rating. Frieman had to think about his for a couple days. "No one likes to get the news about how to improve."

Before beginning the 360 process, most organizations train employees in how to give and receive ratings. "We're trying to drive people toward the understanding that feedback is given neither to kiss ass nor to kick ass," says DiCostanzo. Witt tells VOA raters to "point out the behavior, its impact and what the person should do to correct it. Point out behavior you've actually observed." Last year, Energy began a 360 program for all senior executives. "Don't take it personally," DiCostanzo tells them. "[Your raters] care enough about you to have given the feedback and expect you're going to do something with it."

Some organizations have begun linking training and development opportunities to the 360 process. At Energy's Idaho Operations Office, for example, training is linked directly to the 360 feedback form. "We have a Leadership Development Program which is directly linked to the 360 questions," Bennion says. "When the employee and supervisor sit down for the mid-year review and the 360 feedback indicates a problem area, the employee can access the 360 questions on our intranet, click on the questions and find many training opportunities associated with that question."

Once raters are comfortable assessing performance and writing comments, familiarity can breed contempt, says Theresa Hammer, who administers Richland's 4-year-old program. "At first they were timid. Now [that] they've been doing this a long time and they see they can be anonymous, we've been getting verbal digs, inappropriate comments," Hammer says. "We went out and did more training and explained why nonconstructive comments were bad. You could have five or six people with glowing comments and one who says the person shouldn't work for the government and has no ethics. They will focus on it and wonder who did it and why. [Comments like] 'This person has the writing skills of a third grader' [do] nothing but anger people." If an employee received threats or racial slurs on a 360 report, the equal employment opportunity office would get involved, Hammer says, but none of that has been reported.

Pencils or Software

Employees might need special training just to learn how to fill out 360-degree feedback questionnaires. Many agencies use off-the-shelf 360 software, including computerized feedback forms. Those forms added a challenge for Richland employees, many of whom were unfamiliar with computers, Hammer says. On the other hand, computerized forms make data crunching easy. Using paper forms might cost less, but if an average 360-degree circle contains 10 people, that's potentially 10 forms per employee that must be scanned or keyed into a calculating program to generate a report. Software has the added advantage of stripping rater identifiers from feedback forms as soon as they are sent electronically to the central administration point, making it all but impossible to trace the source. Almost all 360 products come with numerous generic rating areas and items from which agencies can choose to tailor feedback forms to their needs.
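
A minimal sketch of that stripping step, with invented field names rather than any vendor's actual schema:

```python
# Hypothetical intake step: rater identifiers are discarded the moment
# a form arrives at the central administration point, so nothing stored
# afterward can trace a rating back to its source.
RATER_FIELDS = ("rater_name", "rater_email", "rater_id")
central_store = []  # stands in for the administrator's database

def submit_form(raw_form):
    """Store only the anonymized content of a submitted form."""
    central_store.append({k: v for k, v in raw_form.items()
                          if k not in RATER_FIELDS})

submit_form({"rater_name": "A. Colleague", "communication": 7,
             "comment": "Clear and concise in briefings."})
print(central_store)  # no rater identifiers survive intake
```

Discarding identity at intake, rather than merely hiding it in reports, is what allowed Albuquerque to answer those Privacy Act requests with a shrug: the information simply didn't exist.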

Tailoring questions is tricky but important, as the Postal Service learned from its first experience with 360 feedback. USPS' first pencil-and-paper forms contained 240 questions and took too long to complete. The new forms are 130 questions long. Even though a vendor processes the forms and provides counseling to recipients, the Postal Service is considering trimming more questions and moving to a software system, Edwards says. Most agencies use 20 to 30 questions, having found raters become jaded and comments disappear if forms are too long. VOA's engineering shop learned to seek feedback only on those behavioral attributes that presented the biggest problems in the office, Witt says.

Agencies also have discovered that even with fewer questions, filling out feedback forms can become a huge burden on employees. USPS employees are expected to hand out forms to 14 peers and direct reports. Most agencies expect people to have at least five or six people in their feedback circles. That means one person can end up rating many colleagues. Colantuoni wound up with 24 forms this year, Witt with 10. Both say that's too many.

Lessons Learned

Those experienced with 360-degree feedback offer a number of cautions to organizations considering adopting it.

  • Do not attempt a 360 program without enthusiastic support from top officials, they warn. At USPS, Postmaster General Marvin Runyon himself brought in the program and it still received a tepid welcome. At the Environmental Protection Agency, lack of enthusiasm at the top has stalled an effort to bring in a 360 program.

  • Organizations undergoing restructuring, reorganization or downsizing should not undertake the massive cultural change implied by 360-degree feedback. Doing so risks "fueling an already tense situation with distrust and paranoia," according to OPM. The Postal Service stumbled by introducing its 360 program amid a major restructuring, Edwards says. "Our timing was off. When you're reorganizing, restructuring or having a RIF, there's enough anxiety. If you introduce any new process, it's going to be viewed with suspicion."
  • Plan for paranoia, even if you're not involved in confidence-shaking change, says VOA's Witt. "It's a new concept. You'll be rating your boss, your peers. Confidentiality is a concern."
  • All agree on the need for ample advance planning, communication and training. For example, you'll need to plan how to handle those who refuse to participate. Many programs allow employees about to retire to opt out of being rated.
  • You'll also need to get those who do participate to commit to using the feedback, lest the program be seen as a waste of everyone's time. The Postal Service has built into its newest rating form questions about how those rated used their first feedback results for self-development.
  • Organizations also should be careful about customer feedback. The views of internal customers are relatively easy to incorporate, since they understand the organization and employees' jobs. But experts warn against asking external customers to rate employees' performance. Customers are better at rating services than working relationships and their feedback should be used to evaluate teams or entire agencies, except, perhaps, in the case of front-line employees.

Despite the pitfalls, experts say no federal organization that has undertaken a multi-source feedback program has shut it down. Most have overcome mistrust on the part of employees and supervisors to find participants genuinely appreciate and use the information the process unearths. "This is the first opportunity in our careers to get this kind of feedback," Witt says of himself and his engineering colleagues. "We find technical problems easy to solve. It's the personal interaction that poses the most formidable challenge and people are less well-equipped for it by our training. There is a need for this kind of feedback."
