Pentagon war game simulations grow more complex

Are the Pentagon's increasingly complex war game simulations better at predicting battlefield outcomes?

There are the games people play to pass the time on a rainy Sunday afternoon. And then there are the games the military plays to prepare for the next war. Once upon a time, these two were the same: Chess, after all, was invented some 1,400 years ago to teach battle tactics. More recently, during the 1930s at the U.S. Naval War College, captains literally crawled on the floor with their model ships, like little kids. But the strategies they tried out with those toys were very grown-up--and they helped to win World War II.

In the past 50 years, as computers have increased vastly in power, war-game simulations have grown more complex--and ever more important in defense planning. They are particularly important today, as the new Administration conducts a high-level strategic review that is challenging every Pentagon weapon to prove its worth. "We have to justify all our new weapons systems based on specific models" of how they would perform in future conflicts, said Michael Macedonia, chief scientist at the Army's Simulation, Training, and Instrumentation Command in Orlando, Fla. "They're very, very powerful tools, and we're using them more."

But these high-tech tools have limits, too. The latest computer war game may actually be as stylized and as removed from modern battlefield realities as chess was from the battles of the seventh century. The difference is that, in this age of video and visual effects, the computer looks a lot more convincing--and its seemingly precise predictions can affect how the military trains, plans, and even chooses which weapon to buy.

As an example of both the power and the limits of computer models, consider one that the military calls JANUS. Originally designed to train Army officers, JANUS is now used extensively by the RAND think tank to explore how future weapons and tactics will work. It can track 3,000 tanks, aircraft--and even individual soldiers--moving over an area 200 kilometers square. It uses a database drawn from real-world equipment tests to determine who can see, shoot, and kill whom.

But JANUS falls down when it comes to psychology. Its virtual soldiers never misunderstand their orders, improvise, panic, or run away. Instead, they advance stolidly along programmed paths, shooting and being shot at, until a human player inputs new orders. JANUS' technical proficiency will teach war gamers a lot about how different weapons work. But JANUS cannot instruct the players about how real human soldiers would react to those weapons. That missing piece could lead war gamers to draw the wrong conclusions altogether if they forget that actual troops won't act like JANUS' brave but stupid automatons.

The limits of computer models can distort military planning. Before the 1991 Persian Gulf War, modelers accurately predicted the course of the war's air campaign, but most of them greatly overestimated U.S. casualties in the ground battle. Computers could calculate the physics of whether a U.S. stealth plane could evade Iraqi radar, but not the psychology of whether Iraqi troops would stand firm or surrender when challenged. Misled by their models, and by training simulations in which all foes fought to the death, some U.S. units advanced overcautiously--easing the escape of enough members of the elite Republican Guard to preserve Saddam Hussein's regime.
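To make JANUS' blind spot concrete, here is a minimal sketch, in Python, of the kind of deterministic engine described above. It is not JANUS code, and every number in it (the two-kilometer engagement range, the kill probabilities, the unit names) is invented for illustration. Units march along scripted waypoints and trade dice rolls against a lookup table, and nothing in the program can panic, improvise, or retreat.

```python
# A bare-bones, invented stand-in for a JANUS-style engine: scripted movement,
# table-driven engagements, and no model of morale or fear.
import random

# Hypothetical single-shot kill probabilities, indexed by (shooter, target) type.
KILL_TABLE = {
    ("tank", "tank"): 0.4,
    ("tank", "infantry"): 0.6,
    ("infantry", "tank"): 0.1,
    ("infantry", "infantry"): 0.3,
}

MAX_RANGE_KM = 2.0  # illustrative engagement range, not real test data


class Unit:
    def __init__(self, name, kind, side, waypoints):
        self.name, self.kind, self.side = name, kind, side
        self.path = list(waypoints)   # scripted route; the unit never deviates
        self.pos = self.path.pop(0)
        self.alive = True

    def advance(self):
        # Advance stolidly to the next waypoint, regardless of losses.
        if self.path:
            self.pos = self.path.pop(0)


def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def step(units, rng):
    for shooter in units:
        if not shooter.alive:
            continue
        for target in units:
            if (target.alive and target.side != shooter.side
                    and distance(shooter.pos, target.pos) <= MAX_RANGE_KM):
                # The outcome is a dice roll against the lookup table: pure
                # physics and probability, with no crew deciding to stand or flee.
                if rng.random() < KILL_TABLE[(shooter.kind, target.kind)]:
                    target.alive = False
                break  # one engagement per unit per turn
    for u in units:
        if u.alive:
            u.advance()


if __name__ == "__main__":
    rng = random.Random(1)
    units = [
        Unit("Blue-1", "tank", "blue", [(0, 0), (2, 0), (4, 0), (6, 0)]),
        Unit("Red-1", "tank", "red", [(8, 0), (6, 0), (5, 0)]),
        Unit("Red-2", "infantry", "red", [(8, 1), (7, 1), (6, 1)]),
    ]
    for _ in range(4):
        step(units, rng)
    for u in units:
        print(u.name, "alive" if u.alive else "destroyed", "at", u.pos)
```

Even in this toy version, all of the realism lives in the input tables rather than in the logic itself; swap in different probabilities and the same stolid little battle comes out differently.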
"Technology is not completely predictable, but it's more predictable than human beings are," concluded Eliot Cohen, author of the comprehensive statistical study of the Gulf air war and director of strategic studies at Johns Hopkins' Paul H. Nitze School of Advanced International Studies in Washington. "The greater the human component that you have to think about, the more problematic any prediction is." And that human component grows ever greater in an age when war is no longer a conventional face-off between two massive, well-matched land armies. Instead, today's conflicts are the brushfire wars, such as those in the Balkans, where strict limits on brute force are necessary and a premium is put on good intelligence, precise strikes, swift maneuvers, and high morale. This is a vision of war that President Bush has embraced, but that traditional computer models cannot yet grasp. Nevertheless, the Pentagon is working hard to develop better games. One new approach is to use real people in the war simulations, as a way to more accurately model human behavior in a real war. By electronically linking troops out on a field exercise with a computer war game back at a headquarters, real ships at sea can pick up computer-generated aircraft on radar, and pilots flying real planes can collaborate with colleagues in simulators. This technique lets real troops interact with weapons that do not yet exist, or fight out the key battle in an otherwise virtual war, thus injecting human variability into the computer models. But as models become ever more ambitious, they also grow ever more vulnerable to the old "GIGO" problem: "Garbage in, garbage out." The most sophisticated simulation will still give the wrong answers if it starts with the wrong assumptions about how effective specific weapons are under battlefield conditions. It might seem to be an easy matter to program into a simulation the basic engineering data on how far or fast a weapon can shoot and hit its target, for example. But in truth, the military does not put its weapons through all the various permutations that a war game can envision. "There is not as much actual testing done as you would think," said civilian war-gaming guru and author Jim Dunnigan. And the data that do exist are often scattered among many different military labs across the four armed services, and thus are hard to find. So, said Dunnigan, "a lot of times, when a number is needed, it is made up." Now, a made-up number may be better than no number at all, especially if it is arrived at by experienced military war gamers. But the more complex the model, the more made-up numbers there are--and, pretty soon, the simulation becomes less and less accurate. "In too many models, if I start unfolding exactly what goes on in that black box of theirs, I'm horrified," said retired Lt. Gen. Glenn Kent, a former Air Force modeler who is now at RAND. "[With] these humongous, unwieldy, opaque models ... nobody knows quite what's going on inside of them; and when you do find out, you're not happy." So can a simulation do justice to the realities of war? Can it model both the power of the most modern weapons, and the constraints of age-old human weaknesses? Maybe one day. But a simulation doesn't need to be dead-on accurate. In fact, it's probably best that war games sometimes overestimate an enemy. An analysis of how many troops and supplies to ship overseas for a war had better not assume the enemy will flee, freeze up, or be outmaneuvered. 
"It'd be nice if it happens, but we're not basing the planning on that kind of hope," said E.B. Vandiver III, director of the Center for Army Analysis. But Vandiver can accurately model an enemy's collapse. On the eve of the Gulf War ground offensive, he predicted a four-to-six-day war, with fewer than a thousand U.S. Army dead and wounded. His estimate was much closer than most modelers' to the real outcome: a four-day war with 848 casualties. "A lot of that could have been luck," Vandiver chuckled, but he also based his forecast on a thorough, two-level analysis. In modeling individual battles, he carefully calculated the overall quality of the Iraqi army, not just that of its best troops. And in modeling how those battles fit together into the whole war, he assumed that the United States would successfully outmaneuver the Iraqis, "so we fight them when and where we want to." That turned out to be exactly right-but it was an assumption that Vandiver fed into his model, not a result that it fed back to him. The key insight didn't come from a computer: It came from a human mind. So here is the not-so-dirty secret, the man behind the curtain: No matter how sophisticated the computer simulation, every model rests on human judgment. Humans write the programs, input the assumptions-and often tweak the outputs to make them make sense. In many major war games, for example, opposing teams of human players enter their orders for a battle, the virtual forces clash, and then a computer model calculates the outcome. But the final arbiter of who "won" is not a computer program, but a "control group" of experienced officers and experts. Navy Capt. John Sokolowski, who runs modeling and simulation at the Joint Warfighting Center in Suffolk, Va., said, "It's a judgment call." The danger, of course, is in the bad calls. In World War II, a Japanese admiral overruled a war game that showed two aircraft carriers might be lost in an attack on Midway Island--only to lose those same ships in the actual battle. It helps when modelers make their assumptions openly, rather than burying them in arcane computer code, so that they are easier to understand and to debate. At its best, this method allows experienced professionals such as Vandiver to flesh out a model's statistical skeleton with their own understanding. Indeed, one of the Marine Corps' most popular new war games harks back to the days before computers. In the Corps' "Combat Decision Range," a senior officer unfolds a fast and furious battle scenario for a subordinate, who must quickly make correct decisions. The game provides realistic images and sounds, but the "model" is the officer's own knowledge. Said Frank Jordan, war-game director at the Marine Corps' Warfighting Lab: "All the computer does here is generate the graphics." Not just in the Corps, but throughout the military world, more and more officers and experts are turning to simpler simulations. Instead of trying to depict a whole war in all its complexity, these models break off a manageable piece and analyze it in terms simple enough that their users can actually understand. And as personal computers grow faster and easier to use, according to Dunnigan, "many people are rolling their own" by adapting commercial games to military uses, or even writing new simulation programs themselves. These simpler models will never replace the complex, large-scale simulations. But they can complement them. 
"Every tool has its shortcomings, something it misses," said Michele Flournoy, a former defense official now at the Center for Strategic and International Studies. "They are most helpful when they are used in combination"--as tools, not as replacements, for human thought. "The biggest problem is that people fall in love with their models," said Army scientist Macedonia. "They're only gross approximations.... But we would be foolish not to use them."