Seven F/A-18 Hornets, lower left, members of the U.S. Navy's Blue Angels flight demonstration team, approach One World Trade Center in 2013. Mark Lennihan/AP file photo

Viewpoint: U.S. Intelligence Needs Another Reinvention

After failing to detect the 9/11 plot, spy agencies reinvented themselves for an age of terrorism, but a new generation of technological threats requires a new round of reforms.

Eighteen years ago, al-Qaeda operatives hijacked planes, toppled buildings, terrified an entire nation, and killed nearly 3,000 innocents. That the elaborate 9/11 plot went undetected will forever be remembered as one of the intelligence community’s worst failures. For many U.S. intelligence officials, memories of that day remain fresh, searing, and personal. Still hanging over the entrance to the CIA’s Counterterrorism Center is a sign that reads, “Today is September 12, 2001.” It’s a daily reminder of the agency’s determination to prevent future attacks—but also of the horrifying costs when intelligence agencies adapt too slowly to emerging threats.

For a decade after the Soviet Union’s collapse, the CIA and FBI were mired in Cold War structures, priorities, processes, and cultures even as the danger of terrorism grew. My research has shown that, even though many inside and outside U.S. intelligence agencies saw the terrorist threat coming and pressed for change years earlier, they could not get the necessary reforms enacted. The shock of 9/11 finally forced a reckoning—one that led to a string of counterterrorism successes, from foiled plots to the operation against Osama bin Laden. But now, nearly two decades later, America’s 17 intelligence agencies need to reinvent themselves once more, this time in response to an unprecedented number of breakthrough technologies that are transforming societies, politics, commerce, and the very nature of international conflict.

As former CIA Deputy Director Michael Morell and I wrote in Foreign Affairs earlier this year, the threat landscape is changing dramatically—just as it did after the Cold War—and not because of a single emerging terrorist group or a rising nation-state. Advances in artificial intelligence, open-source internet-based computing, biotechnology, satellite miniaturization, and a host of other fields are giving adversaries new capabilities, eroding America’s intelligence lead, and placing even greater demands on intelligence agencies to separate truth from deception. But the U.S. intelligence community is not responding quickly enough to these technological changes and the challenges they are unleashing.

While 9/11 was a surprise, it should not have been. In the preceding decade, a dozen high-profile blue-ribbon commissions, think tank studies, and government reports all sounded the alarm, warning about the grave new threat of terrorism and recommending urgent and far-reaching intelligence reforms to tackle it. As I documented in my book Spying Blind: The CIA, the FBI, and the Origins of 9/11, those studies issued a total of 340 recommendations that focused on crucial intelligence shortcomings such as coordination problems, human-intelligence weaknesses, and poor information-sharing within and across agencies. These were exactly the same weaknesses the 9/11 Commission ultimately identified. Yet before the attacks, almost none of the recommendations were fully implemented. The overwhelming majority, 268 to be exact, produced no action at all—not even a phone call, memo, or meeting. Nine months before the attacks, the bipartisan Hart-Rudman Commission, which conducted the most comprehensive assessment of U.S. national security challenges since the Cold War’s end, correctly predicted that America’s institutional deficiencies left the nation exceptionally vulnerable to a catastrophic terrorist attack. But these and other external calls for reform went nowhere.

Reform efforts inside the FBI and CIA failed, too. Although intelligence officials repeatedly warned executive branch leaders and Congress about the terrorist threat in reports and unclassified hearings starting as early as 1994, intelligence agencies failed to overhaul themselves to better identify and stop looming terrorist dangers. The FBI, for example, declared terrorism its number-one priority back in 1998. Within months, the embassy bombings in Kenya and Tanzania made “al-Qaeda” a household name in the United States. But by 9/11, the FBI still devoted only 6 percent of its personnel to counterterrorism issues. Between 1998 and 2001, counterterrorism spending remained flat, 76 percent of all field agents continued to work on criminal cases unrelated to terrorism, the number of special agents working international terrorism cases actually declined, and field agents were often diverted from counterterrorism and intelligence work to cover major criminal cases. A 2002 internal FBI study found that two-thirds of the bureau’s analysts—the people who were supposed to “connect the dots” across leads and cases—were unqualified to do their jobs. And just weeks before the 9/11 attacks, a highly classified internal review of the FBI’s counterterrorism capabilities gave failing grades to every one of the bureau’s 56 U.S. field offices. Meanwhile, CIA Director George Tenet labored mightily to get more than a dozen U.S. intelligence agencies to work better together, but he faced resistance at every turn. Tenet couldn’t even get agencies to adopt common badges that would have eased access to one another’s buildings. Counterterrorism efforts remained scattered across 46 different organizations without a central strategy, budget, or coordinating mechanism.

In the run-up to the attacks, the CIA and FBI had 23 opportunities to penetrate and possibly stop the 9/11 plot. They missed all 23, for one overriding reason: Both agencies were still operating as they had in a bygone era, one that gave terrorism low priority and kept information marooned in different parts of the bureaucracy.

For months, the CIA sat on information indicating that two suspected high-level al-Qaeda operatives were probably inside the United States. Why didn’t anyone tell the FBI? In large part because the CIA had never been in the habit of notifying the FBI about suspected al-Qaeda operatives before. There was no formal training program or well-honed process for putting potential terrorists on a watch list or notifying other agencies about them once they entered the country. And when the agency finally did tell the FBI about these two suspected terrorists, 19 days before 9/11, the bureau’s manhunt for them was labeled “routine,” assigned to a single office, and given to a junior agent who had just finished his rookie year and had never led an intelligence investigation before. This, too, wasn’t a mistake. It was standard practice. For the FBI’s entire history, catching perpetrators of past crimes was far more important than stopping a potential future disaster.

We now know that the two hijackers, Khalid al-Mihdhar and Nawaf al-Hazmi, were hiding in plain sight, using their real names in everything from the San Diego telephone directory to their bank accounts and travel documents; living for a while with an FBI informant; and contacting several targets of past and ongoing FBI counterterrorism investigations. All of this was unknown to the FBI before 9/11.

Today’s threat landscape is vastly more complex than it was in 2001. Terrorists are one item on a long list of concerns, including escalating competition and conflict with Russia and China, rising nuclear risks in North Korea, Iran, India, and Pakistan, roiling instability in the Middle East, and authoritarians on the march around the world. Supercharging all these threats are new technologies that are accelerating the spread of information on an enormous scale and making intelligence both far more important and far more challenging.

Now, as in the run-up to 9/11, early indicators of the coming world are evident, and the imperative for intelligence reform is clear. The first breakdown of this new era has already occurred: the intelligence community’s failure to quickly or fully understand Russia’s weaponization of social media in the 2016 American presidential election. Before the election, intelligence agencies did not clearly grasp what was happening. Since the election, the revelations have only gotten worse. Thanks to investigations by the bipartisan Senate Intelligence Committee and Special Counsel Robert Mueller, we now know that Russia’s social-media influence operation started in 2014, possibly earlier, and included dispatching Russian intelligence operatives to the United States to study how to maximize the effectiveness of Moscow’s social-media campaign to divide Americans and give one presidential candidate an advantage over another.

We also know that Russia’s deception efforts in 2016 already look primitive in comparison with what’s next. Thanks to advances in artificial intelligence, “deepfake” photographs, videos, and audio recordings are becoming highly realistic, difficult to authenticate, widely available, and easy to use. In May, a video doctored by anonymous users to make House Speaker Nancy Pelosi appear drunk went viral on Facebook. When the social-media giant refused to take it down, two artists and a small technology startup created a deepfake of Mark Zuckerberg and posted it on Instagram. In it, the phony Zuckerberg brags, on what looks like a CBS News program, about his power to rule the world. “Imagine this for a second: one man with total control of billions of people’s stolen data,” he says. “Whoever controls the data, controls the future.” Just last week, The Wall Street Journal reported the first known use of deepfake audio to impersonate a voice in a cyber heist. An executive at a U.K.-based energy firm thought he was talking to his boss when in reality it was an AI-based imitation, right down to the lilt and slight German accent. The fraudulent call resulted in the transfer of $243,000.

The potential for deepfake deceptions in global politics gets scary very quickly. Imagine a realistic-seeming video showing an invasion, or a clandestine nuclear program, or policymakers discussing how to rig an election. Soon, even seeing won’t be believing. Deception has always been part of espionage and warfare, but not like this.

Meanwhile, old methods of intelligence-gathering are now being democratized. Spying used to be expensive and exclusive; when satellites that intercepted signals and captured images from space took billions of dollars and tremendous know-how to operate, the United States could afford to maintain a clear technological advantage. Now space is becoming commercialized, with satellites so cheap that middle-schoolers can launch them. Secrets, while still important, aren’t what they used to be: When Russia invaded Ukraine, the best intelligence came from social-media photos posted by Russian troops themselves. And when U.S. Navy SEALs raided Osama bin Laden’s compound in Pakistan, a local resident heard funny noises and inadvertently ended up live-tweeting the operation.

As in the 1990s, many in the intelligence community are sounding alarms and trying to make changes. A 2018 report by Michael Brown and Pavneet Singh of the Defense Innovation Unit warned that China’s venture-capital investment in key American startup companies was designed to give China an edge in technologies with both commercial and military applications. In a report in January, Dan Coats, then the director of national intelligence, told Congress, “For 2019 and beyond, the innovations that drive military and economic competitiveness will increasingly originate outside the United States, as the overall US lead in science and technology (S&T) shrinks; the capability gap between commercial and military technologies evaporates; and foreign actors increase their efforts to acquire top talent, companies, data, and intellectual property via licit and illicit means.”

We are seeing the initial stirrings of reform. Congress created a National Security Commission on Artificial Intelligence, led by Eric Schmidt, former executive chairman of Google’s parent company Alphabet, and former Deputy Secretary of Defense Robert O. Work, to “consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies by the United States to comprehensively address the national security and defense needs of the United States.” (Full disclosure: I am a special adviser to the commission.) And inside the intelligence community, there’s a new Directorate of Digital Innovation at the CIA, new AI initiatives at the National Geospatial-Intelligence Agency, and new cloud-computing efforts at the National Security Agency.

But these efforts are nowhere near enough. What’s missing is a wholesale reimagining of intelligence for a new technological era. In the past, intelligence advantage went to the side that collected better secrets, created better technical platforms (such as billion-dollar spy satellites), and recruited better analysts to outsmart the other side. In the future, intelligence will increasingly rely on open information that anyone can collect, advanced code and platforms that can be accessed online cheaply or for free, and algorithms that can process huge amounts of data faster and better than humans. This is a whole new world. The U.S. intelligence community needs a serious strategic effort to identify how American intelligence agencies can gain and sustain an edge while safeguarding civil liberties in a radically different technological landscape. The director of national intelligence should be leading that effort—but after Coats’s resignation, that job has yet to be permanently filled.

Years ago, one former intelligence official ruefully told me his chief worry: “By the time we master the al-Qaeda problem, will al-Qaeda [still] be the problem?” He was right.