Eyes on The Storm

The National Weather Service rides a tide of praise into the 2006 Atlantic hurricane season.

An urgent message from the National Weather Service office in Slidell, La., issued 20 hours before Hurricane Katrina made landfall last August, provided an eerily accurate description of the future of New Orleans: "Most of the area will be uninhabitable for weeks - perhaps longer. At least one-half of well-constructed homes will have roof and wall failure. . . . The majority of industrial buildings will become nonfunctional. . . . High-rise office and apartment buildings will sway dangerously - a few to the point of total collapse. . . . Airborne debris will be widespread - and may include heavy items such as household appliances and even light vehicles. . . . Power outages will last for weeks. . . . The vast majority of native trees will be snapped or uprooted."

Also jarringly precise was the National Hurricane Center's prediction. Around 9 p.m. the night before, the center's director, Max Mayfield, had telephoned the governors of Louisiana and Mississippi, the mayor of New Orleans and the emergency operations chief of Alabama to brief them personally about the hurricane's magnitude and potential for destruction. Mayfield had made such a call only once before in his 36-year career: in 2002, when Hurricane Lili was threatening Louisiana with 143-mph winds. "I just wanted to be able to go to sleep that night knowing that I did all I could do," Mayfield said in congressional testimony a few weeks after Katrina went ashore at Buras, La. The hurricane's storm surge flooded 80 percent of New Orleans and many neighboring parishes with as much as 20 feet of water. Katrina was the third major hurricane of the 2005 Atlantic hurricane season and possibly the largest Category 3 storm on record. It devastated communities as far as 100 miles from its center.

Multiple investigations in the aftermath of the storm produced nothing but kudos for the center and the Weather Service, its parent organization. They "passed Katrina's test with flying colors," noted the Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina in its February report, "A Failure of Initiative." Arguably one of the best-managed federal agencies, NWS is held up as one of two examples (the other being the Coast Guard) of what little went right during Katrina.

The government's hurricane specialists nailed it. Storm track projections published 56 hours before Katrina made landfall were off by only 15 miles, and the predicted strength published two days in advance of landfall was off by only 10 mph. But basking in the glow of a virtually perfect performance hasn't been enough for the 45 civil servants who staff the hurricane center on the Florida International University campus in Miami. In the weeks following the storm, reports of low morale surfaced. The psychological wounds have been slow to heal. "From a technical standpoint," says Edward N. Rappaport, NHC deputy director, "we did our job well in Katrina. But none of us is happy. More than a thousand people died."

Can it happen in 2006? The Weather Service's forecast for the six-month Atlantic hurricane season that begins June 1 isn't due until later in May. But noted hurricane researchers Philip J. Klotzbach and William M. Gray of Colorado State University say forecasters will have their hands full again this year with 17 named storms; the 1950-2000 average is 9.6. Nine of those will develop into hurricanes, and five will be intense, according to Klotzbach and Gray. They say there's an 81 percent chance that some portion of the U.S. coastline will be hit; that figure has averaged 52 percent over the last century. The 2006 probability is 64 percent along the East Coast and 47 percent along the Gulf Coast.
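The coastwise probabilities in Klotzbach and Gray's outlook hang together arithmetically: if an East Coast strike and a Gulf Coast strike are treated as independent events, the chance that neither occurs is (1 − 0.64)(1 − 0.47), which leaves roughly the quoted 81 percent overall. A minimal sketch of that consistency check (the independence assumption is mine, for illustration, not the forecasters' method):

```python
# Consistency check on the quoted 2006 landfall probabilities.
# Assumes East Coast and Gulf Coast strikes are independent events,
# an illustrative simplification rather than the forecasters' model.
p_east = 0.64  # probability of an East Coast strike
p_gulf = 0.47  # probability of a Gulf Coast strike

p_any = 1 - (1 - p_east) * (1 - p_gulf)
print(round(p_any * 100))  # -> 81, matching the quoted overall figure
```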

The National Hurricane Center is ready. It's flush with staff and money, for now. Four junior-grade hurricane specialists were hired during the off-season, boosting the agency's total to 10. Their salaries come from a year-end supplemental appropriation that's also paying for targeted research on intensity prediction, NHC's toughest scientific challenge. The specialists are still scratching their heads about Katrina and another hurricane named Wilma. Both emerged in the Caribbean and grew to Category 5 status in very little time. They weakened before making landfall. But Wilma, which struck south Florida in October, set a record for rapid intensification. It strengthened from a tropical storm to a Category 5 in less than 24 hours. A second measure of intensity is central pressure; Wilma's fell to the lowest on record in the Atlantic basin in about 12 hours. "We don't understand fully the mechanisms that trigger or sustain that kind of explosive development," says D. L. Johnson, a retired Air Force brigadier general now directing NWS.

Managing Expectations

All the intensity questions aren't likely to be answered this year, or anytime soon. It took years of applied research for NHC staffers to get track forecasts right. As eager as they are to understand the science, they have a more pressing operational challenge-managing expectations. "To some extent, we've become a victim of our own successes in that people tend to take our forecast track verbatim and for granted. Rarely is a forecast going to be perfect. We need to emphasize to them that there is uncertainty involved," says Rappaport, who went to work at the hurricane center right out of weather school in the late 1980s. "On the other hand," he says, "our stakeholders approach us about whether we met our [1993] Government Performance and Results Act goal for this year. 'What new products are you bringing online? Did you set any records with your forecasts this year?' There are two different sets of important interests that we have to provide cautious and appropriate information to."

Hurricane forecasting has come a long way in the last half-century. Government aircraft have gathered location and intensity information since 1944. The advent of weather satellites in 1960 opened forecasters' eyes to vast expanses of ocean that airplanes couldn't reach, so no tropical disturbance goes undetected. Coastal Doppler radars have added copious data about winds and rainfall to help meteorologists understand the structure and evolution of a hurricane's inner core, the bands of thunderstorms and heavy rain spiraling inward to the calm center known as the eye. Hourly wind and wave measurements are gathered by increasing numbers of buoys strategically placed in the Atlantic, Pacific, Gulf and Caribbean waters where observations once were scarce. All these tools pump larger and larger streams of data into ever faster computers that assimilate the information that specialists use to create multiday forecasts of tropical cyclones.

Most of the forecasting progress in the past few decades has come from tremendous advances in operational numerical weather prediction, or computer modeling. It's especially true for track forecasts. The lead time for a landfall prediction has increased from less than a day in 1979 to three days or even five days today. Five-day track forecasts for Hurricane Katrina were almost as accurate as the typical three-day track forecast was in the early 1990s, according to the center. "We've reduced our errors by about 50 percent over the last generation, and we attribute essentially all that to improved model guidance that we receive," says Rappaport. Two-day track forecasts have gotten so good that the National Weather Service is tightening its GPRA goal by almost 17 percent.

This year, NHC specialists will be challenged to pinpoint landfalls within 127 statute miles on average. They did so within 108 to 114 miles in the past two seasons, but those seasons were populated by relatively well-behaved storms. Most of them weren't loopers like Ophelia, the eighth Atlantic hurricane of 2005. That Category 1 storm meandered off the East Coast for 17 days in September, confounding forecasters with its slow, erratic motion. Its unusually large eye remained offshore, but Florida, North Carolina, Massachusetts and Canada felt its effects.
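A track error of this kind is simply the great-circle distance between a forecast storm position and the observed one. A minimal haversine sketch, with hypothetical coordinates chosen only to illustrate the calculation:

```python
import math

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in statute miles via the haversine formula."""
    r = 3958.8  # mean Earth radius in statute miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical forecast position vs. observed landfall point
forecast = (29.5, -89.7)
observed = (29.3, -89.6)
print(round(great_circle_miles(*forecast, *observed)))  # -> 15 statute miles
```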

Despite the improvements in hurricane forecasting, some experts think it still has a long way to go. "Hurricanes, by nature, are notoriously difficult to predict," says Keith Blackwell, a hurricane specialist at the Coastal Weather Research Center of the University of South Alabama. The Mobile scholar-scientist testified before the Disaster Prevention and Prediction Subcommittee of the Senate Commerce, Science and Transportation Committee in September 2005. "There are many aspects of hurricane forecasting in which we display little skill," he told lawmakers.

As he spoke, Tropical Storm Rita was churning up to hurricane strength about 100 miles southeast of Key West. Rita became the fourth most intense hurricane on record in the Atlantic basin. It reached Category 5, with top sustained winds of 180 mph, over the central Gulf of Mexico, but weakened before making landfall in extreme southwestern Louisiana, near the Texas border, as a 115-mph Category 3 storm on Sept. 24.

Blackwell took issue with the accuracy of the National Hurricane Center's five-day forecasts and the dependability of the Saffir-Simpson scale that Western Hemisphere meteorologists use to gauge the ferocity of tropical storms. The scale, a 1-to-5 rating based on intensity, is used to estimate potential property damage and flooding expected with landfall. Wind speed is the determining factor. But Blackwell argued the scale doesn't represent the true impact because it doesn't estimate the size of a hurricane or its associated storm surge and rainfall.
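Because wind speed is the sole determining factor, the rating is mechanical once a storm's top sustained wind is known. A minimal sketch using the sustained-wind breakpoints (in mph) the scale used at the time, checked against Rita's figures from the article:

```python
def saffir_simpson(mph):
    """Saffir-Simpson category from top sustained wind in mph.
    Breakpoints as used during the 2005-2006 seasons."""
    if mph < 74:
        return 0  # below hurricane strength
    for category, upper in ((1, 95), (2, 110), (3, 130), (4, 155)):
        if mph <= upper:
            return category
    return 5

print(saffir_simpson(115))  # Rita at landfall -> 3
print(saffir_simpson(180))  # Rita at peak -> 5
```

Note what the function cannot express, which is exactly Blackwell's complaint: nothing about the storm's size, surge or rainfall enters the calculation.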

He also expressed concern that five-day track forecasts, which aren't as accurate as the government would like them to be, can lead to cynicism, mistrust and even inaction on the public's part. "Theoretically, with these longer range forecasts, communities and the public have greater lead time in order to begin preparing," Blackwell said. "However, I'm not so sure that the vast majority of the public has the confidence necessary in these multiday forecasts to motivate them to begin early preparations."

The Hot Wash

Johnson disagrees: "The U.S. Navy seems pretty happy with it." NHC introduced five-day forecasts in 2003. They were developed a year or two earlier for the Navy, which wanted an extra couple of days' notice to move its ships when bad weather was approaching, and NWS officials thought their accuracy had improved enough to take them public. The forecasts resulted from an annual review that Johnson calls a "hot wash." It plays out in a series of meetings at the end of each Atlantic season. "We have a built-in process for continual improvement," he says. A conference for experts within the Weather Service's parent organization, the National Oceanic and Atmospheric Administration, is followed by an interdepartmental hurricane conference that brings together NOAA and all its federal partners.

That is followed by a weeklong gathering of all the countries in World Meteorological Organization Region IV, the National Hurricane Center's main area of responsibility. On a map, that's a rough rectangle stretching from the equator to the northernmost reaches of Canada and from the Pacific Ocean east of Hawaii to the Atlantic coast of Africa. In the next hurricane season, NHC will take a crack at many of the ideas and suggestions presented in these sessions and other forums.

Five-day forecasts were a substantial change, but most others are small. "We're looking at evolutionary changes and tweaking what we consider to be a pretty good job and what others consider to be a pretty good job," says Johnson.

In 2004, NHC emphasized the uncertainty in forecasts. Forecasters repeatedly urged people in a hurricane's path to ignore the "skinny black line" on track maps and focus instead on the teardrop shape showing the much broader potential strike zone.

Last year, NHC dabbled in probabilities. Forecasters used an experimental computer program to determine the chances of landfall in a given area and to let Web site users see the likelihood of their own homes experiencing winds of tropical storm or hurricane force in the next five days. The tool was so well received that it will not carry the experimental designation this season. It's the first product of the U.S. Weather Research Program's Joint Hurricane Testbed, a multiagency federal effort designed to speed the transfer of promising hurricane research into forecast operations.
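The idea behind such a wind-probability product can be shown with a toy Monte Carlo: perturb the forecast track with random errors representative of past forecast performance, then count the fraction of realizations that bring damaging winds over a given point. Every number below (the error spread, wind radius, track and point coordinates) is invented for illustration; the operational program is far more sophisticated.

```python
import math
import random

def wind_probability(forecast_track, point, wind_radius_miles,
                     error_std_miles, n=10000, seed=1):
    """Toy Monte Carlo estimate of the chance that `point` experiences
    damaging winds, given Gaussian uncertainty in the storm's position.
    All parameters are illustrative, not operational values."""
    random.seed(seed)
    miles_per_degree = 69.0  # rough conversion for this flat-earth toy
    hits = 0
    for _ in range(n):
        # One realization: shift the whole track by a random position error
        dx = random.gauss(0, error_std_miles) / miles_per_degree
        dy = random.gauss(0, error_std_miles) / miles_per_degree
        for lat, lon in forecast_track:
            dist = miles_per_degree * math.hypot(lat + dy - point[0],
                                                 lon + dx - point[1])
            if dist <= wind_radius_miles:
                hits += 1
                break
    return hits / n

# Hypothetical five-day track passing near a coastal point
track = [(25.0, -80.0), (26.5, -82.0), (28.0, -84.0), (29.5, -86.0)]
p = wind_probability(track, point=(28.3, -84.2),
                     wind_radius_miles=60, error_std_miles=100)
print(f"{p:.0%} chance of damaging winds at the point")
```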

Also new for 2006 are staff and equipment upgrades. A venerable computer model for track forecasts is being improved, and a new one is being tested. NOAA and the Federal Emergency Management Agency regularly host workshops for local emergency managers in the off-season; this year, they trained the trainers in hopes of spreading the word faster about hurricanes, forecast products and associated preparedness and response activities.

NHC's four new hurricane specialists will augment operations during the season, expand off-season outreach and participate in hurricane testbed activities. NOAA has three hurricane-hunting research airplanes, but most of the airborne observing that ends up in a forecast is done for the agency by the 53rd Weather Reconnaissance Squadron, an Air Force Reserve unit at Keesler Air Force Base in Mississippi. The Hurricane Hunters' fleet of propeller-driven WC-130 Hercules aircraft flies through tropical storms and hurricanes to take meteorological measurements in the vortex. Toward the end of the 2006 season, the squadron will begin outfitting the fleet with a new sensor developed aboard NOAA's planes to take highly accurate measurements of surface wind speeds.

Changes on the horizon include a possible replacement of the Saffir-Simpson scale that Blackwell dislikes. There was so much criticism of it after Katrina that the center has begun evaluating different approaches that might better describe the destructive potential of hurricanes.

Storm Warning

The most serious challenge to hurricane forecasting is one Max Mayfield and D. L. Johnson can do nothing about. Neither is a member of the executive committee of federal officials overseeing development of the technologically troubled National Polar-Orbiting Environmental Satellite System.

NPOESS is a constellation of six satellites designed to gather more accurate atmospheric and oceanographic data with the promise of vast improvements in long-range forecasting for hurricanes and other severe weather. Targeted for launch in early 2012, NPOESS would replace two aging, independently operated military and civilian weather satellite systems, thus eliminating duplication of effort. In January, the program breached a cost growth threshold of 25 percent set by the 1982 Nunn-McCurdy Act. That triggered a review that could lead to the program's cancellation after the results are presented in June.

NPOESS is three years behind schedule, and costs are running more than $3 billion over original estimates of $6.5 billion. Prime contractor Northrop Grumman lays most of the blame on continuing problems with the development of two instruments in a suite of 13 advanced sensors each satellite is to carry. But the Government Accountability Office also blamed poor contractor performance and program management in an assessment (GAO-06-391) made public in March.

The first operational satellite's later-than-hoped launch is heightening concerns about a potential three-year gap in weather observations. The last of NOAA's older polar satellites is set for launch in December 2007. It was dropped and damaged during tests at a Lockheed Martin factory in 2003. The mishap might have increased the satellite's chance of failure, according to NOAA. If it does quit once delivered to orbit, meteorologists could be confronted with insufficient satellite coverage as early as 2011. Polar-orbiting platforms provide more than 90 percent of the raw data used in civilian and military digital weather prediction models.

For now, the hurricane center and the Weather Service are focused on extracting as much as possible from the resources they have. Johnson acknowledges that "budget realities" have challenged the agencies to maintain current levels of research and observations and to infuse that science into forecasting. "We did a great job in 2005. You hear people say, 'You don't need more money; you're at the top of your game.' But any coach will tell you, you're only as good as your last game," he says. "We're only as good as our last forecast."
