The Intelligence Data Problem Has Changed

We need cognitive tools to predict and prevent threats.

The data problem faced by law enforcement and intelligence agency analysts is very different today from what it was in the past. Ten years ago, the average intelligence analyst spent most of their time searching a sparse landscape for a source that could address their questions. The problem was finding the elusive source amid a scarcity of information.

Today, with the proliferation of social media, the Internet of Things, the dark web, blogs, email, video feeds, commercial satellite imagery, and more, the sources are readily available and apparent. Roughly 2.5 quintillion bytes of data are created every day, and 90 percent of the world's accumulated data has been created in the last two years. When we use our cell phones to hail an Uber or Lyft, pay for our coffee, check in for a flight, or check the weather (among hundreds of other transactions), we create data that stamps our physical location with a time and date. Video is another form of raw data that has exploded over the last 10 years. Today, millions of people post their locations, activities, and future plans on social media for all the world to see. Most of this new data is benign, but not all of it.

The dark web is widely known as a marketplace for illegal drugs and firearms. Time and again we learn that a mass murderer previously posted hate-filled rants or other warning signs on Facebook or Twitter. We are even beginning to see crimes live-streamed on social media. New data feeds and sources such as these are created every day. Think about where we will be in another 10 years.

The overwhelming volume of information available today can hide the vital piece an intelligence analyst needs. Humans are good at making sense of information, but not at processing large volumes of it. Too much information for the human brain to process is just as limiting as not having enough. We need our intelligence analysts evaluating and acting on good information, not spending all of their time searching for it. We must use cognitive, machine-based technology to persistently monitor this vast stream of new information, pick out the relevant pieces, and rank them so that human analysts consider the most important and impactful information first.
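
To make that triage step concrete, here is a minimal sketch, assuming a simple keyword-weight heuristic in place of a trained relevance model. The indicator terms, weights, and function names are illustrative assumptions, not any specific product's method:

```python
# A minimal sketch (hypothetical, illustrative only) of cognitive triage:
# score each incoming item for relevance, then surface the highest-scoring
# items to the analyst first. Real cognitive systems use trained models;
# here a simple keyword-weight heuristic stands in for the scoring step.
from dataclasses import dataclass, field
import heapq

# Hypothetical indicator weights an analyst might configure.
INDICATOR_WEIGHTS = {
    "threat": 3.0,
    "attack": 3.0,
    "weapon": 2.0,
    "target": 1.5,
}

@dataclass(order=True)
class ScoredItem:
    score: float
    text: str = field(compare=False)

def score_item(text: str) -> float:
    """Stand-in for a trained relevance model: sum weights of matched terms."""
    lowered = text.lower()
    return sum(w for term, w in INDICATOR_WEIGHTS.items() if term in lowered)

def triage(items: list[str], top_n: int = 3) -> list[ScoredItem]:
    """Return the top_n most relevant items so analysts read those first."""
    scored = [ScoredItem(score_item(t), t) for t in items]
    return heapq.nlargest(top_n, scored)

if __name__ == "__main__":
    feed = [
        "Weather update: sunny all week.",
        "Post mentions acquiring a weapon and names a target.",
        "Concert photos from downtown.",
        "Forum thread discussing a planned attack.",
    ]
    for item in triage(feed):
        print(f"{item.score:.1f}  {item.text}")
```

The point of the design is the ordering, not the scoring rule: whatever model produces the scores, the analyst's queue starts with the highest-ranked items instead of a raw, unfiltered feed.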

By combining cognitive assistance with human expertise, we can transition from reactive to proactive investigations that allow us to predict and prevent threats. This data shift has occurred at a time when we are asking more of our law enforcement agencies, and taking advantage of it may be the mechanism that enables their success. Traditional law enforcement involves a great deal of looking backward at crimes that have already occurred: a bank robbery, a murder, or an arson for profit where the perpetrator got away.

Solving crimes that have already occurred will always be an important part of the law enforcement mission. But increasingly, law enforcement is being asked to look forward, to predict and prevent crimes so that they never occur. Think of the lone wolf attacks that have become increasingly common in the U.S. and abroad. In law enforcement circles, these attacks have come to be known simply by the names of the cities where they occurred. Mention Boston, San Bernardino, Paris, Orlando, or Dallas and cops across the U.S. think of mass killings and worry that the next city added to the list could be theirs. These law enforcement professionals are desperate for tools that can get them in front of the threat and save lives. The promise of cognitive computing is that it can manage vast volumes of information in real time, identify the indicators or "bread crumbs" leading to a crime or attack, and alert the human decision maker to act before the event occurs.
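
As a purely illustrative sketch of that alerting step, the fragment below accumulates indicator scores per subject over a stream and escalates to a human analyst once a threshold is crossed. The threshold, scoring rule, and names are hypothetical stand-ins for a trained, tuned system:

```python
# A minimal sketch (purely illustrative) of the alerting step: accumulate
# indicator scores per subject over a stream and alert a human analyst
# when the combined score crosses a threshold. The threshold and scoring
# rule below are assumptions, not a real system's parameters.
from collections import defaultdict

ALERT_THRESHOLD = 5.0  # assumed cutoff for escalating to a human

def monitor(stream, score_fn):
    """Yield (subject, total_score) alerts as indicators accumulate."""
    totals = defaultdict(float)
    for subject, text in stream:
        totals[subject] += score_fn(text)
        if totals[subject] >= ALERT_THRESHOLD:
            yield subject, totals[subject]
            totals[subject] = 0.0  # reset after escalation to an analyst

if __name__ == "__main__":
    events = [
        ("user_a", "posted about the weather"),
        ("user_b", "mentioned a weapon and a target"),
        ("user_b", "discussed a planned attack"),
    ]
    simple_score = lambda t: sum(
        w for term, w in {"weapon": 2.0, "target": 1.5, "attack": 3.0}.items()
        if term in t.lower())
    for subject, score in monitor(events, simple_score):
        print(f"ALERT: {subject} reached score {score:.1f}; route to analyst")
```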

Looking forward to prevent crimes or terrorist acts has been a priority in federal law enforcement agencies for years. They have had many successes, preventing tragedies in Los Angeles, New York, and Chicago, to name a few. These successes relied heavily on informants, human intelligence collection, and vigilant border inspections. We can't rely on such tactics alone; they weren't enough to prevent every tragedy. We need something new that can sift through the mountain of data that blocks our vision. Cognitive computing can piece together interrelated snippets of information and tell a story that is not readily recognizable to humans amid the white noise of unrelated information.

How do we make sense of too much information? By employing cognitive computing to bring together millions of pieces of disparate information, 24/7, from structured and unstructured agency data, open source reporting, the dark web, social media, sentiment analysis, seized media, video, image recognition, weather data, Internet of Things sensors, commercial satellite imagery, and more. Adding a cognitive capability allows the machine to "think like an analyst," evaluating and ranking information and then delivering the most important pieces to the human analyst first.
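
One hedged sketch of what "bringing together disparate information" can look like in practice is normalizing each feed into a common record format, so a single scoring and ranking stage can run over all of it. The source names and fields below are assumptions for illustration, not any specific product's schema:

```python
# A minimal sketch (illustrative, not any specific product's API) of fusing
# disparate feeds into one common record format before scoring and ranking.
# The source names and fields below are assumptions for the example.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Record:
    source: str          # e.g. "social_media", "agency_db", "sensor"
    timestamp: datetime  # normalized to UTC for cross-source ordering
    content: str         # normalized free text for downstream scoring

def from_social_post(post: dict) -> Record:
    """Normalize a hypothetical social-media post into the common format."""
    return Record("social_media",
                  datetime.fromtimestamp(post["epoch"], tz=timezone.utc),
                  post["body"])

def from_sensor_event(event: dict) -> Record:
    """Normalize a hypothetical IoT sensor event into the common format."""
    return Record("sensor",
                  datetime.fromtimestamp(event["time"], tz=timezone.utc),
                  f"sensor {event['id']} reading {event['value']}")

if __name__ == "__main__":
    records = [
        from_social_post({"epoch": 1_700_000_000,
                          "body": "Planning something big downtown."}),
        from_sensor_event({"time": 1_700_000_100,
                           "id": "cam-42", "value": "motion"}),
    ]
    # A single normalized stream can now be scored and ranked continuously.
    for r in sorted(records, key=lambda r: r.timestamp):
        print(r.source, r.timestamp.isoformat(), "-", r.content)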

A machine will never replace the human as the final evaluator of intelligence. A hybrid analysis, combining the best of what humans do well with what cognitive computing does well, will achieve superior results. We can be overwhelmed by this new data problem, or we can master it, sort and sift it, and make it work for us in ways that could not be foreseen 10 years ago.

Chris Trainor is the IBM threat prediction and prevention leader.