High-Tech Hurdles

nferris@govexec.com

Even if there were no Government Performance and Results Act, government executives, like many of their counterparts in the private sector, still would be developing information technology performance measures. In corporations as well as federal agencies, IT costs have mushroomed and development projects have missed their budget and schedule targets.

Meanwhile, corporate executives have embraced the notion of tallying return on investment. Like Congress and the White House, corporate policy-makers are demanding that IT demonstrate its value to the organization and its mission. As a consequence, IT executives are trying to embed performance measurement within their operating procedures and management strategies.

It's a tough assignment. On both the corporate side and the federal side, there are few role models. When it comes to measuring IT performance from a broad, strategic perspective, failures probably outnumber successes to date--although no one knows for sure. Performance measurement gets lip service as a best practice in IT, but it's difficult to find organizations that do it routinely and find it useful.

Nonetheless, federal IT managers don't have a choice about performance measurement. They are bound to follow the 1996 Clinger-Cohen Act, which has more stringent requirements for IT performance assessment and reporting than the 1993 Results Act. Clinger-Cohen requires agencies to:

  • Revise mission-related processes before making significant IT investments.
  • Select, manage and evaluate the results of IT investments.
  • Link IT performance measures to agency programs.
  • Report to the Office of Management and Budget annually on their progress.

But agencies are discovering that all the laws in the U.S. Code can't force performance measurement to take root in their organizations. Although Clinger-Cohen has been in effect for more than two years, implementation is inconsistent. Some parts of the law are making a difference throughout agencies, while other provisions--especially those relating to performance measurement--are having less effect. Some agencies have made strides toward treating the mandate as more than just a paperwork exercise, but it's hard to find one that is fully carrying out the law.

Climbing the Y2K Mountain

Asked why agencies have been slow to begin measuring IT performance, many observers cite the enormous year 2000 problem. Available employees and dollars are being used to make sure federal computer systems keep operating after Dec. 31, 1999. Until those battles are won, agencies aren't installing many new information systems or changing the way they do business. And embracing performance measurement does require change, the experts say. "It's a big cultural change," says Michael Yeomans, a Pentagon IT management specialist. "People are not used to managing this way."

Research by the Government Performance Project supports Yeomans' statement. For example, agencies often had difficulty identifying how managers benefit from having computer systems in place--even though these systems often have been labeled "management information systems." For the most part, agency officials express confidence that their systems are useful to managers, but they do not provide specifics to support their assertions.

As required by Clinger-Cohen, agencies are giving their chief information officers management responsibility for most information systems. The CIOs are beginning to provide the central IT coordination and planning that often has been lacking, but progress has been slow. Not only the year 2000 problem, but also turf battles and lack of top management support have hampered CIOs in some agencies.

There are bright spots in the big picture, however. After years of jettisoning their employee training programs, agencies seem to be willing to spend money on training once again. This change reflects the need to hold onto technically skilled workers and add to their ranks, as well as the recognition that training can boost the agency's return on its IT investments.

Another area where agencies have made strides is in their use of IT systems to make information available to the public. The World Wide Web is one major enabler of this change; federal agencies were in the vanguard when the Web became widely available in the early 1990s. But agencies also are using electronic mailing lists, electronic commerce systems, document distribution systems and other new technologies to communicate with their business partners, constituencies and the general public.

As systems become more accessible to the public and more widely distributed in their internal architectures, security vulnerabilities are ever more troubling. But agencies seem to be paying more attention to this long-neglected area, and there are indications that the deficiencies in security technology will be cured by emerging technologies such as digital signature tools. With security, as with so many other aspects of IT, assessments must be subjective and qualitative, because there are no good ways to measure the security of an agency's systems.

Even where agencies have committed wholeheartedly to measuring the performance of their IT programs, they are finding it takes time. It requires changes in accounting procedures, software, personnel management, procurement, reporting mechanisms, budgets and more. Agencies often are making progress in one or more of these areas, but they have not yet connected all the dots.

In most cases, the information that agencies need for an accurate snapshot of their current situation simply isn't available. Virtually all performance measurement methodologies begin with a baseline of "as-is" data describing today's circumstances. As agency executives found to their dismay when tackling the year 2000 crisis, it's hard even to find out what information systems are operating in large agencies.

Last August, more than a year after being directed to inventory their mission-critical information systems and just six weeks before the administration's deadline to complete initial Y2K renovations of these systems, agencies still were refining their lists of mission-critical systems. "Senior federal managers continue to re-evaluate which systems are truly critical to their organizations' missions and reset their priorities accordingly," OMB reported. Between May and December, the number of systems on agencies' mission-critical lists dropped from 7,336 to 6,696.

Elusive Costs

If agencies aren't even sure what information systems they are operating, they know even less about how much they spent to acquire those systems and what they are spending now to operate and maintain them. With today's emphasis on return on investment, they're likely to have better data in the future about initial outlays for the systems begun in the last year or two. But operating costs remain elusive.

A major obstacle to collecting cost data is found in the federal government's financial systems, says Patrick Plunkett, one of the government's most knowledgeable experts on measuring IT performance. "Our accounting systems are not structured to collect that kind of information," says Plunkett, a senior analyst in the General Services Administration's Office of Governmentwide Policy.

For the most part, he says, computer systems are accounted for as administrative expenses or overhead, lumped in with office supplies, electric power and janitorial services. This makes it difficult to see their true costs. Agencies also report separately on their IT spending plans and on the contracts they award. The agency reports add up to almost $30 billion in IT spending this year, but exactly where that money goes or what it buys is unknown. As a rule, only the largest purchases are recorded, and often they are misidentified.

Since IT performance measurement is in many ways a question of costs and benefits, the lack of useful cost information hinders evaluation efforts. It's a truism that anyone can achieve first-rate performance--if he can spend an unlimited amount of money. A primary issue in performance measurement is weighing resources expended against results achieved.
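To make that weighing concrete, here is a minimal sketch of the underlying arithmetic, in Python and with entirely hypothetical figures; a real evaluation would start from the audited cost data agencies currently lack:

    # Minimal sketch: weigh resources expended against results achieved.
    # All dollar figures are hypothetical.
    def simple_roi(annual_benefit, acquisition_cost, annual_operating_cost, years):
        """Return net benefit and return on investment over a system's life."""
        total_cost = acquisition_cost + annual_operating_cost * years
        total_benefit = annual_benefit * years
        net = total_benefit - total_cost
        return net, net / total_cost

    net, roi = simple_roi(annual_benefit=1200000, acquisition_cost=2500000,
                          annual_operating_cost=400000, years=5)
    print("Net benefit: $%d  ROI: %.0f%%" % (net, roi * 100))

Without reliable figures for the cost side, even arithmetic this simple cannot be done with confidence.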

One reason agencies have difficulty accounting for IT is that financial management systems usually operate separately from the rest of an agency's computer systems. Agencies that have moved toward combined systems--as the Federal Housing Administration has done with its FHA Mortgage Insurance Systems program--are likely to see two benefits: greater IT efficiency and more useful reporting for decision-makers. As a rule, the fewer the systems, the better.

Even if the systems themselves are not actually combined, new data warehousing technologies allow information from disparate systems to be merged and compared. At the Patent and Trademark Office, the comptroller and chief information officer are collaborating on development of a data warehouse.
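A minimal sketch of the idea in Python: records exported from two feeder systems are merged on a shared system identifier so that cost and usage can finally be viewed side by side. The system names and fields here are hypothetical, not drawn from any agency's actual warehouse:

    # Merge records from two disparate systems on a shared key.
    finance = {            # exported from the financial management system
        "SYS-001": {"annual_cost": 450000},
        "SYS-002": {"annual_cost": 120000},
    }
    inventory = {          # exported from the IT asset inventory
        "SYS-001": {"name": "Docket tracking", "users": 800},
        "SYS-002": {"name": "Correspondence imaging", "users": 150},
    }

    warehouse = [
        {"id": sys_id, **attrs, **finance.get(sys_id, {})}
        for sys_id, attrs in inventory.items()
    ]
    for row in warehouse:
        print(row["id"], row["name"], "annual cost:", row.get("annual_cost"))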

Another difficulty in accounting for IT costs is that computers and networks may do multiple tasks. If an office acquires a PC network for a specific job--processing incoming correspondence, perhaps, or preparing financial reports--it's easy enough to charge the cost against that activity. But if additional users begin using the same network for preparing the agency budget or supporting another agency program, how should the network costs be allocated? In many agencies, computers are used for purposes other than the purposes that justified the initial purchase. Measuring costs and benefits thus becomes even more difficult.
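To see why the question matters, here is a minimal sketch of one simple approach, proportional allocation, with hypothetical numbers standing in for the usage data agencies rarely have:

    # Allocate a shared network's cost in proportion to measured use.
    network_cost = 300000          # hypothetical annual cost of the network
    usage_share = {                # fraction of seats, accounts or traffic
        "incoming correspondence": 0.50,
        "budget preparation": 0.30,
        "other program support": 0.20,
    }
    for activity, share in usage_share.items():
        print("%s: $%d" % (activity, network_cost * share))

The arithmetic is trivial; getting defensible usage shares is not.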

There's light at the end of that tunnel, however. In the last year or so, many people have endorsed the notion of an IT architecture that distinguishes between common user services and program-specific capabilities and functions.

In this view, virtually every federal employee with a desk gets a networked computer, just as he or she gets a chair, a telephone, electric light and access to the restroom. The PC, the basic multipurpose software (operating system, electronic mail, World Wide Web browser and perhaps office applications such as word processing), shared printers and the pipes that connect the PC to the larger world are regarded as infrastructure, just like the telephone system. The user's employer probably pays into an agency fund a flat fee for basic computing services, and extra charges are assessed for extraordinary services.

In addition, the office or program pays for the special servers, software, communications services and other IT expenditures required to support a specific program. For instance, if the Army Corps of Engineers' Marine Design Center needs special software tools and printers for making preliminary drawings of new boats, the center pays for that hardware and software. It's not a perfect scheme, because there always will be gray areas, and it's still evolving. But it seems to be an important step toward untangling IT budget, cost and management issues.

Strategic Resource

This architectural scheme helps answer a thorny question: Is information technology a support function or a strategic resource? That question often has been answered by labeling IT a support function, which explains its low visibility in budgets. But once IT is broken out into infrastructure and mission components, it can be understood as both support function and strategic resource. Specific systems and their ingredients belong in one category or the other, according to their roles in the architecture.

Performance measures should be different for the two categories, simplifying to some extent what has been a knotty set of problems. The infrastructure should be judged primarily for its cost-effectiveness in comparison with other such systems, along with its robustness and reliability. Mission-specific systems should be evaluated for their fit with the mission and their contribution to its achievement.
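A minimal sketch of how such a split might be expressed, with hypothetical systems and metric values; the point is only that each category gets its own yardstick:

    # Judge infrastructure on cost and reliability; judge mission systems
    # on their contribution to a program objective. All values hypothetical.
    systems = [
        {"name": "Desktop network", "category": "infrastructure",
         "cost_per_seat": 2400, "uptime": 0.998},
        {"name": "Claims processing", "category": "mission",
         "objective": "cut claims backlog", "backlog_change": -0.15},
    ]
    for s in systems:
        if s["category"] == "infrastructure":
            print(s["name"], "- cost/seat:", s["cost_per_seat"],
                  "uptime:", s["uptime"])
        else:
            print(s["name"], "-", s["objective"],
                  "- backlog changed by", s["backlog_change"])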

In order to evaluate the performance of mission-specific systems, the mission must first of all be clear--something that is not always the case at federal agencies, as elsewhere. Many observers say that as agencies hone their top-level Results Act performance plans and objectives and develop their tactical plans, the role of IT and other contributing elements within the agencies can be clarified. "The agencies' strategic plans often are not detailed enough to relate them directly to IT," says one OMB official.

As agencies flesh out their plans in successive years, they are looking more carefully at how IT and other resources contribute to their mission. "In the past, it was sometimes hard to tie IT to strategic intent," says Edward F. Burke, an Andersen Consulting partner who works with government agencies on Results Act compliance and related issues. Now, he says, agencies must move quickly beyond the phase where IT is merely linked to existing strategies. "We've found it's not enough for IT to be tied to strategy," he explains. Instead, agencies should consider new strategies that are IT-enabled.

In the next few years, Burke says, technology will make it possible for agencies to accomplish objectives that were not realistic before. As one example, the IRS aims to eliminate much of the data entry that has been one of its most resource-intensive functions. The service will get more companies and individuals to file tax returns electronically. IRS will dispense not only with the initial data entry but also with many of the edits and errors: inexpensive tax return preparation software shows filers their errors before the returns are submitted.

With IT, corporations such as FedEx, Amazon.com, American Express and Dell Computer Corp. are transforming the way they relate to their customers. GSA's Plunkett says federal agencies must do likewise for government to win the respect of the American public. "The bar has been raised" for expectations about the speed and quality of service, Plunkett says, and IT is the key to meeting those higher expectations.

One implication of the move toward IT-enabled strategies: Technologically savvy individuals need to be involved in developing strategies and policies. That's why Clinger-Cohen requires major agencies to have chief information officers who report to the secretary or an equivalent agency head.

Although IT's role is increasingly strategic, it has a more mundane role in the Results Act environment as well. IT generates many of the reports that agency programs need to evaluate their own performance. For example, managers of the Agriculture Department's Food Stamp Program nationwide rely on information from the Food Stamp Program Integrated Information System (IIS), which tracks food stamp issuance and participation. IIS data and data from other USDA Food and Nutrition Service programs are combined in the National Data Bank, a reporting system whose primary customers are managers and analysts in departmental and congressional offices. Other users of the National Data Bank include universities, research groups and private citizens, who can get the data on paper or online.

Asked how the Food and Nutrition Service assesses the performance of the National Data Bank with respect to customer satisfaction, officials replied only that they generally get compliments on the system and users outside government "appear to find the information provided to be adequate."

Customer satisfaction surveys often are pointed to as a best practice that's used in the private sector and should be used more in government. Some agencies are using such surveys, but seldom in IT and even more rarely when the primary customers are within government, as is the case with FNS' National Data Bank.

IT managers are getting acquainted with the notion of internal customer surveys and other performance measures, however, as they consider whether "seat management" makes sense for their agencies. Seat management is a form of outsourcing that entails treating desktop computing and local PC networks as a service to be provided on a per-seat, or per-user, basis.

Agencies are contracting for office computing resources for a fixed price per seat per year. To do so, they need to determine what they are spending currently and what levels of service they are providing. For example, how often should desktop computers be replaced, and what length of time is acceptable for a user to go without a computer that has broken down? Not all aspects of PC support are quantifiable, so customer satisfaction surveys often are used in assessing whether a seat management contractor is providing acceptable service.
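The baseline arithmetic behind such a decision is straightforward once the cost data exists. A minimal sketch, with hypothetical seat counts and dollar figures:

    # Compare current in-house desktop costs with a per-seat contract price.
    seats = 1200
    in_house = {                    # hypothetical annual desktop spending
        "hardware refresh": 900000,
        "software licenses": 360000,
        "help desk staff": 1100000,
        "network operations": 640000,
    }
    contractor_price_per_seat = 2600

    current_per_seat = sum(in_house.values()) / seats
    print("Current cost per seat:     $%.0f" % current_per_seat)
    print("Contractor price per seat: $%d" % contractor_price_per_seat)

The hard part, again, is filling in the in-house numbers, which most agencies have never tracked.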

Many IT managers are more familiar with measuring system operating performance than they are with outcomes. They are happy to provide extensive data about how many millions of instructions per second (MIPS) their mainframes can process, how many millions or trillions of bytes of data they have stored online, and how fast the stored data can be retrieved, down to the millisecond. It's a technical discipline and one that has needed metrics to help in making difficult purchase decisions. Clearly a faster disk drive is preferable, all else being equal.

But not all of IT has lent itself to metrics. For years, managers have struggled with assessing programmer productivity and performance, arguing about how to quantify the amount of code a programmer produced and its quality. Elaborate techniques have been devised for running the code on a computer and deducting credit for the bugs that turned up. However, few, if any, of those schemes could measure the overall value of the software being produced, and some turned out to be counterproductive because they encouraged programmers to write inefficient, unnecessarily complex programs instead of ones that were short and simple.
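A minimal sketch of why raw output counts backfire: two versions of the same trivial function do identical work, but a lines-of-code score rewards the bloated one:

    # A naive lines-of-code score rewards verbosity, not value.
    verbose = '''
    def is_even(n):
        if n % 2 == 0:
            result = True
        else:
            result = False
        return result
    '''
    concise = '''
    def is_even(n):
        return n % 2 == 0
    '''

    def loc_score(source):
        return len([ln for ln in source.splitlines() if ln.strip()])

    print("verbose version scores", loc_score(verbose))   # 6 "productive" lines
    print("concise version scores", loc_score(concise))   # 2, though it's better code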

Managers trained to focus on these kinds of concerns have had difficulty making the transition to the kind of performance measurement contemplated in the Results Act, where activities are valued according to their role in achieving the agency's mission and strategic objectives. "System performance is not the goal," says Harrison Fox, a specialist on the staff of the House Government Management, Information and Technology Subcommittee. "It's a way of telling you if you're being successful."

Best-Value Choices

But the trend to employ contractors for IT work and buy packaged software is turning many IT managers into contract managers who are answerable to their superiors for results. If they no longer employ many programmers but instead buy commercial, off-the-shelf software and contract for its customization and installation, managers need not worry about programmer productivity. Instead, they can look at the alternative systems on the market and choose the one that offers the best value.

IT managers cannot make these best-value selections on their own, however. One hallmark of the Clinger-Cohen era is the now-widespread practice of using IT investment review boards to evaluate proposals for IT spending. These boards, consisting of senior officials representing agency management, program management, financial management, IT management and perhaps others, meet periodically to seek consensus on IT capital investment priorities and resource allocation. Agencies have found them useful in many ways. For example, such a board can help programs leverage their limited resources by sharing systems.

But too much focus on plans for new systems can result in neglect of the continuing costs of operating and maintaining systems. Systems acquisition was a major headache for the federal government for more than a decade. A great deal of high-level effort was needed to bring about the legislative, regulatory and organizational changes that have allowed IT buyers to move toward commercial-style practices. Even though most of the systemic acquisition problems no longer are plaguing agencies, procurements still receive the lion's share of attention in some organizations, leaving unresolved any operations and maintenance issues for systems already in place. This means some opportunities to improve IT performance are overlooked.

One way to avoid excessive emphasis on new systems is to adopt a systems portfolio management approach. Agencies such as the Energy Department and the Defense Department's Health Affairs unit are using this approach, recommended by many authorities. Energy was the lead agency for development of the Information Technology Investment Portfolio System (I-TIPS), a software program that helps agencies inventory and manage their systems. Many other agencies are using I-TIPS, although the Chief Information Officers Council last year rejected a proposal to adopt it as a mandatory federal standard.

In a mid-1998 survey of federal IT managers by the Association for Federal Information Resources Management, "measuring IT contribution to mission performance" emerged as the most important challenge facing the 76 respondents. Association officials said the CIOs they surveyed apparently are struggling with the high-level issues of aligning goals and assessing results on the one hand, and with the technical issues of measurement on the other.

OMB does not require agencies to use specific IT metrics, nor is it expected to do so soon. Given the wide variation in agencies' missions, programs and IT strategies, as well as in the installed base of systems, OMB officials say it is difficult to compare IT performance across agencies or find common metrics that would facilitate such comparisons. This stance, however, is somewhat at odds with the prevailing interest in benchmarking--that is, identifying a solid performer that can be used as a standard of measurement. In fact, the Clinger-Cohen Act says OMB "shall compare the performances of the executive agencies in using information technology and shall disseminate the comparisons to the heads of the executive agencies." By all accounts, this requirement has taken a back seat to year 2000 crisis avoidance.

In the end, the simplest and most obvious measures may be the best. One official who has reviewed many IT plans says there is a clear relationship between the agencies that are lagging in their year 2000 repairs and the agencies with histories of poor IT management. Meanwhile, many of the new systems being developed and installed for agencies still turn out to be more expensive, more time-consuming and less functional than promised.

In one such case, the Senate Appropriations Committee criticized the IRS last year for letting its $321 million service center consolidation program run 10 months late and 12 percent over budget. The Senate's response was to label the program "a performance-based project" and decree that the fiscal 1999 costs and any overruns would have to be paid for through program savings. If this should signal the start of a trend, IT performance measurement could take root in agencies much faster than it has to date.


Information Technology Management Grades

  SSA: A
  FNS: A
  EPA: B
  FDA: B
  FEMA: B
  FHA: B
  FSIS: B
  OSHA: B
  VHA: B
  Customs: C
  FAA: C
  INS: C
  PTO: C
  HCFA: D
  IRS: D
Rating Criteria
  • Information technology systems are useful to managers and their output is utilized.
  • Coordination of technology from the center of the organization.
  • Multiyear IT planning process in place.
  • Responsiveness of the acquisition and development process to the needs of users.
  • Well-trained IT staff and users.
  • IT system costs are justified by benefits the systems deliver.
  • Citizens and other stakeholders have appropriate access to information, but privacy and security are ensured.
Best Practices
Cut across stovepipe system boundaries and aggregate information to increase its utility. For example, the Environmental Protection Agency has built an award-winning data warehouse that cross-correlates pollution enforcement information and publishes it on the World Wide Web for use by EPA employees and the public.

Manage information systems as important assets. Y2K has shown that most agencies didn't know much about their systems and the role each system played in the agency's operations. Agencies, such as the Social Security Administration, that had adopted a "portfolio" approach to systems management were in better positions to modify their systems methodically.

Use IT as a strategic enabler and invest in systems with direct effects on mission achievement. The Immigration and Naturalization Service has won plaudits for its use of technology to speed international travelers through airports, and the Food and Drug Administration has done likewise with drug imports.

Embrace electronic commerce, broadly defined. Agencies such as the Federal Housing Administration are speeding up processing and cutting costs by automating their business transactions. The FHA Connection handles 20,000 transactions a day. Such systems can be the best way to maintain or improve services without adding employees.

Share data across agency boundaries. SSA is reducing fraud and overpayments by getting online lists of prison inmates, veterans benefit recipients, recent deaths, and unemployment benefits claimants, for example. The bottom-line impact of matching this data against SSA benefits claims far outweighs the costs.