A firefighter walks near the North Umpqua River in Oregon in September. U.S. Forest Service

To Manage Disaster Data, Use ‘Fog’ of the Cloud

Cell phones, drones, and security cameras generate tons of data during disasters. A new approach splits the processing between the cloud and its "fog."

A new visual cloud computing architecture could streamline the processing of visual and electronic data during disasters, which could mean the difference between life and death for survivors.

Visual data from security cameras, personal mobile devices, and aerial video can be invaluable to first responders and law enforcement. It can be critical for deciding where to send emergency personnel and resources, tracking suspects in human-caused disasters, or detecting hazardous materials.

This abundance of visual data, especially high-resolution video streams, is difficult to process even under normal circumstances. In a disaster, the computing and networking resources needed to process it may not be available at sufficient capacity near the disaster site. The question then becomes: what is the most efficient way to process the most critical data, and how can the most relevant visual situational awareness be delivered quickly to first responders and law enforcement?

The research team proposed a collection, computation, and consumption architecture, linking devices at the network-edge of the cloud processing system, or “fog,” with scalable computation and big data in the core of the cloud. Visual information flows from the collection fog—the disaster site—to the cloud and finally to the consumption fog—the devices being used by first responders, emergency personnel, and law enforcement.
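As a rough illustration of that flow, the sketch below models the three stages in Python. The class names, the size-based offloading rule, and the threshold are illustrative assumptions for this sketch, not details of the published architecture.

```python
# Illustrative sketch of a collection-fog -> cloud -> consumption-fog pipeline.
# All names and the size-based offloading rule are assumptions, not the
# team's actual system.

from dataclasses import dataclass


@dataclass
class Frame:
    camera_id: str
    size_mb: float      # payload size of the captured frame
    summary: str = ""   # lightweight description produced at the edge


class CollectionFog:
    """Edge devices at the disaster site: cameras, drones, phones."""

    def preprocess(self, frame: Frame) -> Frame:
        # Cheap, local processing (e.g., downsampling, tagging) happens here.
        frame.summary = f"frame from {frame.camera_id}"
        return frame


class CloudCore:
    """Scalable compute in the core of the cloud for heavyweight analytics."""

    def analyze(self, frame: Frame) -> str:
        return f"cloud analytics result for {frame.summary}"


class ConsumptionFog:
    """Devices used by responders: tablets, laptops, command-post screens."""

    def display(self, result: str) -> None:
        print("responder view:", result)


def route(frame: Frame, edge: CollectionFog, cloud: CloudCore,
          responders: ConsumptionFog, offload_threshold_mb: float = 5.0) -> None:
    """Decide whether a frame is handled at the edge or offloaded to the cloud."""
    frame = edge.preprocess(frame)
    if frame.size_mb > offload_threshold_mb:
        # Large payloads are offloaded to the cloud core for heavy processing.
        result = cloud.analyze(frame)
    else:
        # Small payloads can be summarized locally and forwarded directly.
        result = f"edge-only result for {frame.summary}"
    responders.display(result)


if __name__ == "__main__":
    edge, cloud, responders = CollectionFog(), CloudCore(), ConsumptionFog()
    route(Frame("drone-07", size_mb=12.0), edge, cloud, responders)
    route(Frame("phone-42", size_mb=1.5), edge, cloud, responders)
```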

The system works similarly to the way mobile cloud services are provided by Apple, Amazon, Google, Facebook, and others.

“It works just like we do now with Siri,” says Kannappan Palaniappan, associate professor in the University of Missouri’s computer science department. “You just say, ‘Find me a pizza place.’ What happens is the voice signal goes to Apple’s cloud, processes the information, and sends it back to you. Currently we can’t do the same with rich visual data because the communication bandwidth requirements may be too high or the network infrastructure may be down [in a disaster situation].”

The workflow of visual data processing is only one part of the equation, however. In disaster scenarios, the amount of data generated could create a bottleneck in the network.

“The problem really is networking,” says assistant professor Prasad Calyam. “How do you connect back into the cloud and make decisions because the infrastructure as we know it will not be the same? No street signs, no network, and with cell phones, everybody’s calling to say they’re okay on the same channel. There are challenging network management problems to pertinently import visual data from the incident scene and deliver visual situational awareness.”

The answer is a set of algorithms that determine which information needs to be processed in the cloud and which can be handled on local devices, such as laptops and mobile phones, spreading the processing across multiple machines. The team also developed an algorithm that aggregates similar information to limit redundancy.

“Let’s say you’re taking pictures of crowds, say, from surveillance cameras because it’s a law-enforcement type of event,” Palaniappan says. “There could be thousands of such photos and videos being taken. Should you transmit terabytes of data?

“What you’re seeing is often from overlapping cameras. I don’t need to send two separate pictures; I send the distinctive parts. That mosaic stitching happens in the periphery or edge of the network to limit the amount of data that needs to be sent. This would be a natural way of compressing visual data without losing information.

“To accomplish this needs clever algorithms to determine what types of visual processing to perform in the edge or fog network, and what data and computation should be done in the core cloud using resources from multiple service providers in a seamless way.”
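A toy version of that edge-side deduplication idea is sketched below: a frame is transmitted only if it differs enough from frames already kept. The difference metric and threshold are arbitrary stand-ins, not the mosaic-stitching method described by the researchers, and the "camera frames" are simulated as small NumPy arrays.

```python
# Toy edge-side deduplication: send a frame onward only if it is distinct
# enough from frames already kept. The metric and threshold are illustrative
# stand-ins, not the stitching algorithm from the paper.

import numpy as np


def mean_abs_diff(a: np.ndarray, b: np.ndarray) -> float:
    """Average per-pixel difference between two grayscale frames, scaled to [0, 1]."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))) / 255.0)


def select_distinct(frames: list[np.ndarray], threshold: float = 0.05) -> list[int]:
    """Return indices of frames that differ from every frame already kept."""
    kept: list[int] = []
    for i, frame in enumerate(frames):
        if all(mean_abs_diff(frame, frames[j]) > threshold for j in kept):
            kept.append(i)
    return kept


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    # A near-duplicate view (e.g., an overlapping camera) and an unrelated view.
    near_copy = np.clip(base + rng.integers(-3, 4, size=base.shape), 0, 255).astype(np.uint8)
    different = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

    keep = select_distinct([base, near_copy, different])
    print(f"transmitting {len(keep)} of 3 frames:", keep)  # the near-duplicate is dropped
```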

The work appears in the journal IEEE Transactions on Circuits and Systems for Video Technology. Funding for the project came from ongoing grants from the National Science Foundation and the Air Force Research Laboratory, and from the US National Academies Jefferson Science Fellowship.

Source: University of Missouri

This article was originally published in Futurity. Edits have been made to this republication. It has been republished under the Attribution 4.0 International license.