Bytes vs. Brains

Sometimes computers make better decisions than people.

In 1997, the Marine Corps created a panel to investigate the most dangerous plane in the military fleet. The Harrier, a British-designed fighter jet with the remarkable capability of taking off and landing vertically, had racked up an atrocious safety record since the Marine Corps first bought it in 1971. Dozens of marines had died in nearly 150 noncombat accidents. The panel found that, along with a slew of mechanical and other problems, insufficient maintenance and pilot training caused some of the crashes. Pilots were averaging about half the flight hours needed to stay current on handling a plane one Pentagon official called "the nastiest horse in the rodeo." And the Harrier's formidable maintenance requirements (about 25 hours of work for each hour of flight, according to the Associated Press) weren't always met.

Part of the problem stemmed from the complexity of deciding which pilot and plane should fly each mission. When commanders created flight schedules, they had to weigh a range of factors. Flying with night-vision goggles, for example, requires extra training, so commanders had to account for a pilot's qualifications, the time of the flight, and the times of sunrise and sunset. They also had to coordinate with the maintenance crew to ensure the plane would be ready.

Choosing the safest and most efficient schedule for an entire squadron became a logistical headache. "There was too much data for a commander to sift through," says Lt. Col. Alan Pratt, a Marine Corps Systems Command project officer. As a result, commanders sometimes overlooked important information, such as how many hours each pilot had flown in a given month. With another type of plane, such oversights might not have mattered as much; a commander's best approximation might have been good enough. But the demanding and temperamental Harriers raised the stakes. They left no room for human error.

The Harrier Review Panel determined that the Marine Corps needed better information systems to prevent maintenance and training lapses. The Marine Corps is now testing a system called the Coherent Analytical Computing Environment. Among other functions, CACE provides a central repository for information on pilots and planes and helps operations and maintenance crews share information. Perhaps most important, it can weigh more than 26,000 factors and produce a six-month flight schedule. Commanders can enter criteria (for instance, that they want a new lieutenant trained within four months) and the system will find a solution that meets the parameters. By letting the system select the schedule, "You've got the right pilot with the right equipment flying the right mission," Pratt says.
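
In miniature, the search CACE performs looks something like the Python sketch below. Every pilot, mission and constraint here is invented for illustration; the real system weighs thousands of factors at once.

    # A toy version of constraint-based scheduling. All names and the single
    # constraint are invented; CACE handles more than 26,000 factors.
    from itertools import permutations

    pilots = ["Lt. Adams", "Lt. Baker", "Capt. Cole"]
    missions = ["day patrol", "night training", "check flight"]

    def feasible(assignment):
        # Suppose only Capt. Cole holds a current night-vision qualification,
        # so the night mission must go to Capt. Cole.
        return assignment["night training"] == "Capt. Cole"

    # Try every way to pair pilots with missions; keep the first
    # assignment that satisfies all the constraints.
    for ordering in permutations(pilots):
        assignment = dict(zip(missions, ordering))
        if feasible(assignment):
            print(assignment)
            break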

CACE is an example of what some experts call "decision automation," the use of computer systems to analyze information and produce decisions. Others might call CACE "decision support," because it requires a person to sign off on a recommendation, but the line between systems that make decisions and systems that recommend decisions blurs easily.

Decision automation has become indispensable in parts of the private sector and government because of computers' ability to process large volumes of information and to crunch data quickly. The banking industry uses decision automation to approve or reject loan applications. Airlines use it to set prices and schedules. And as the technology has become more flexible, government agencies have been relying on it more and more to manage logistics, detect fraud or security threats, and evaluate applications for government programs.

Computers, it seems, are better than people at making decisions, at least some of the time. But what are their limits, and how far are we willing to trust them? As decision automation becomes more sophisticated, how should federal agencies best put it to use?

CLONING EXPERTISE

In the best-selling book Blink (Little, Brown 2005), Malcolm Gladwell extols what he calls "the power of thinking without thinking." He describes scenarios in which snap judgments prove correct, sometimes in contrast to conclusions reached through careful research. The benefit of such instinctive thinking seems contrary to the value of decision automation, which seeks to add more data and more considerations to the picture. Gladwell's snap judgments are gut feelings; decision automation is factual analysis.

But Gladwell isn't writing about guesswork. The psychologists, art historians and others he profiles draw on years of training, experience and observation to make their swift assessments. In other words, they're experts. They have collected enough information in their memories to recognize patterns quickly and accurately. Gladwell even describes the part of the brain that makes rapid judgments as "a kind of giant computer that quickly and quietly processes a lot of the data we need."

As recruiters and human resource managers know, however, experts are in short supply. Take the challenge the Homeland Security Department's Customs and Border Protection bureau faces in inspecting the millions of shipping containers that come into U.S. ports each year. Because the ships carry far too many containers for federal agents to inspect, the department must decide how best to use its resources to detect security threats, drugs and other forbidden cargo. One could imagine a Gladwellian agent with an uncanny ability to home in on problematic containers. But if such agents exist, there are not enough of them.

The department uses the Automated Targeting System, which helps even the least experienced agent choose containers for inspection with the accuracy of an expert. It sifts through historical patterns and current intelligence to direct agents to the containers most likely to hold dangerous or illegal cargo, based on characteristics such as the container's point of origin and declared contents.

With ATS, Customs and Border Protection agents have the benefit of more information than a human mind could handle. Researchers say decisions based on data tend to be more accurate. "There's a fair amount of research suggesting that if you can bring data and analytics to the table, the decisions will probably be of higher quality," says Tom Davenport, a professor at Babson College who often writes about decision automation. "But there's a fair amount of research that says humans prefer to be intuitive because it's easy and quick." One benefit of decision automation is that once a system is in place, making decisions based on data becomes nearly as easy as following intuition.

In building decision systems (which are sometimes called "expert systems"), organizations seek to train computer systems to make accurate snap judgments. They do this by extracting the criteria for a decision (the collected knowledge of a group of human experts, patterns culled through data mining, the statutory requirements of a government program, or other factors) and translating them into a set of rules.

In the Marine Corps' CACE, for instance, one rule might be that each pilot must fly at least 20 hours per month. In Homeland Security's ATS, a rule could be to recommend inspection of all containers that share characteristics with containers that have been found to carry heroin.
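
Expressed in code, such rules become simple predicate functions that the system checks against each record. The Python sketch below is illustrative only: the field names are invented, and the 20-hour threshold is just the example above.

    # A minimal sketch of decision rules as predicate functions.
    # Field names are invented; the 20-hour figure is the example from the text.

    def min_monthly_hours(pilot):
        """Each pilot must fly at least 20 hours per month."""
        return pilot["hours_this_month"] >= 20

    def night_vision_current(pilot):
        """A pilot assigned a night mission needs a current goggle qualification."""
        return not pilot["night_mission"] or pilot["nvg_qualified"]

    RULES = [min_monthly_hours, night_vision_current]

    def violations(pilot):
        """Return the name of every rule this pilot's record breaks."""
        return [rule.__name__ for rule in RULES if not rule(pilot)]

    print(violations({"hours_this_month": 12,
                      "night_mission": True,
                      "nvg_qualified": False}))
    # -> ['min_monthly_hours', 'night_vision_current']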

WRITING THE RULES

Few decision automation projects have gone as smoothly as the one the Small Business Administration launched in 1999. SBA had established the program the previous year to encourage business development in historically underutilized business zones. The HUBZone Empowerment Contracting Program gives preference to companies from these areas when they compete for federal contracts, as long as they meet certain criteria.

"The original plan [for processing applications to the program] was to follow the paper process that SBA would usually follow," says Michael P. McHale, associate administrator for the program. But the agency expected many applications for HUBZone certification. McHale didn't know how his 10-person office would meet its requirement to evaluate applications within 30 days. "We realized with our staff we could never do it," he says.

Like the Homeland Security Department, the HUBZone office needed a tool to help it do more with limited resources. But instead of a system like the Automated Targeting System, which would bring more data into decision-making, or a system like the Marine Corps' CACE that would find solutions to meet many requirements, McHale and his team needed a tool that could reduce their workload by making routine decisions for them.

They built a system that verifies whether firms meet the program's eligibility rules. Companies first use an Internet screening tool to determine whether they are located within a HUBZone. If so, they complete an online application. The system automatically checks whether a company meets the requirements and notifies the applicant of potential problems. It then sends an assessment to analysts in the HUBZone office, who review the findings.

By automating, the HUBZone office has been able to notify companies of decisions within 20 days on average. Since 1999, the program has grown to include more than 13,000 firms. When the office stopped giving companies the option to submit paper-based applications four years ago, McHale worried that some would object. "We found just the opposite," he says. The office hasn't had a single complaint about the system.

The ease of HUBZone's automation can be explained in part by its short, simple list of eligibility criteria. Firms must be located within one of the zones, and at least 35 percent of their employees must reside there. They must be owned or controlled by U.S. citizens, and qualify as small businesses. These are the kinds of black-and-white questions most easily translated into terms computers can understand.
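
In fact, the core check could be written almost directly from the criteria. A minimal Python sketch, with invented field names (the 35 percent threshold comes from the program's rules described above):

    # A sketch of the HUBZone eligibility test as black-and-white checks.
    # Field names are invented; the four criteria follow the list above.

    def hubzone_eligible(firm):
        resident_share = firm["employees_in_zone"] / firm["total_employees"]
        return (firm["located_in_hubzone"]
                and resident_share >= 0.35   # at least 35 percent reside in a zone
                and firm["us_citizen_owned"]
                and firm["qualifies_as_small_business"])

    print(hubzone_eligible({"located_in_hubzone": True,
                            "employees_in_zone": 7,
                            "total_employees": 20,
                            "us_citizen_owned": True,
                            "qualifies_as_small_business": True}))  # -> True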

But when the criteria are more complex or less clear, translating them into rules becomes far more difficult. After HUBZone's success, for example, SBA asked McHale to help automate the application for the 8(a) Business Development and Small Disadvantaged Business certification, which helps small firms run by "socially and economically disadvantaged individuals." In the past, companies submitted a narrative explaining why they should be given the certification.

To automate the decision, SBA had to capture all the relevant factors in yes-or-no questions. The project succeeded: SBA built a system that reduces the workload, speeds the application process and ensures consistent decisions. But it was more difficult and time-consuming than the HUBZone automation.

Writing the rules for a decision requires a much higher level of expertise than making the decision on a case-by-case basis. Consider computerized grammar programs, for example. You might know when to use a semicolon. But could you write a formula to tell a computer when a semicolon is appropriate? Because of how difficult and time-consuming it can be to capture all the rules, experts say automation usually pays off only for decisions that require faster-than-human response times, such as when to make adjustments to stop a blackout from spreading across the power grid.

Sometimes the criteria we use to make judgments aren't clear at all. Gladwell writes, for example, about a tennis coach who knows when a player will double fault, but he doesn't know how he knows.

And consider the process of choosing a mate or a new car. Some Web sites have tools that ask for your criteria and offer a match, but they often cause people to rethink their parameters, says Robert Neches, director of the distributed scalable systems division at USC's Information Sciences Institute, which is working on CACE along with a number of other organizations. "The computer tells us to buy the Volvo station wagon and the first thing most people want to know is, 'How do I change my inputs so I get the red Miata, because that's what I really want,' " he says. "Computers are good at coming up with the best plan to meet your priorities, but they can't tell you what your priorities should be. A person has to do that."

STAYING IN THE LOOP

What happens when the rules change? Federal agencies such as the Social Security Administration and the Centers for Medicare and Medicaid Services have used systems to make routine decisions for many years, but updating them to reflect policy changes has been a slow and complicated process.

New business rules management software, which allows nontechnical people to revise rules, has made decision automation more flexible. Homeland Security's ATS, for instance, is designed to quickly adapt to new intelligence.
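
The idea behind such software is to store rules as data rather than as code, so an analyst can change a threshold without calling a programmer. A hypothetical Python sketch of that separation:

    # A sketch of rules kept as editable data instead of hard-coded logic.
    # An analyst could revise these rows in a table; no code change needed.

    RULES = [
        ("hours_this_month", ">=", 20),
        ("days_since_nvg_training", "<=", 60),
    ]

    OPS = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}

    def passes(record, rules=RULES):
        """A record passes only if it satisfies every rule row."""
        return all(OPS[op](record[field], limit) for field, op, limit in rules)

    print(passes({"hours_this_month": 25, "days_since_nvg_training": 30}))  # -> True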

Still, decision systems can consider only what they've been programmed for. "There are very few decisions where it's possible to get all the factors into the machine and even fewer where the machine alone can weigh them," Neches says. "My experience of 20-something years is that for anything that's hard enough that people need help with it, there's usually one more factor than you've put into the machine." Another problem: The quality of the decisions depends on the quality of the data the system uses. The Government Accountability Office has criticized ATS because it relies heavily on shipping information that often isn't accurate.

For those reasons, systems that handle all but the most basic, quantifiable decisions typically stop one step short of complete automation. "If you cannot guarantee 100 percent reliability, it's better to keep the human in the decision-making loop," says Raja Parasuraman, a professor at George Mason University and an expert on decision automation. When the HUBZone system doesn't recognize a rural address, for example, an analyst will research the location.

In battlefield situations, the time between detection of a threat and action is called the sensor-to-shooter loop. Automating the decision to attack a target could shorten the loop. "There's a caveat," Parasuraman says. "If the computer is wrong, then there is a significant cost." Keeping a person in the loop to sign off on the system's recommendation is slower, but more accurate.

When organizations require review of automated decisions, they face the challenge of keeping workers trained and motivated to assess the computer's interpretation of raw data.

Automation can cause employees to lose familiarity with the many factors required to make a decision. Parasuraman's research into aviation and warfare automation shows that people also come to rely on and trust less-than-perfect systems, especially when they are under pressure. "When people are simply given a recommendation, sometimes they don't even look at the information," he says. "It's a kind of laziness." That's one shortcoming computers don't have.
