Machine Morality: How Bias Creeps Into Geospatial Intelligence

AI-powered digital maps offer visualizations and analytics many of us couldn’t imagine just a few years ago. But who is training the models behind these maps, and on what data? And how are inherent and unconscious biases impacting their quality? We tackle these questions — and more — on this episode of Machine Morality.

Presented by Esri

Maps are everywhere. Whether navigating to and from the office or analyzing location-specific threats from foreign adversaries, government and defense leaders rely on geographic information systems, or GIS, to glean real-time insights that help them make decisions. 

These AI-powered digital maps offer visualizations and analytics many of us couldn’t imagine just five or 10 years ago. But, as with all data-powered technologies, agencies need to ensure they’re tapping up-to-date, robust information to fuel their GIS tools and following best practices for data use and maintenance. For example, who is training the models behind these maps, and on what data? And how are inherent and unconscious biases impacting their quality? Those are the questions we’ll tackle on this episode of Machine Morality, a podcast from Esri and GovExec’s Studio 2G where we get to the bottom of some of government’s biggest ethical AI challenges. On this episode, hear from defense and industry leaders who spoke at the recent webcast, “AI and Ethics: Integrity and Geospatial Analytics.”

Listen to their conversation by clicking on the full podcast episode below. And be sure to download and subscribe on Apple Podcasts, Spotify or SoundCloud to take Machine Morality with you on your favorite device. 

This content was produced by GovExec’s Studio 2G and made possible by our sponsor(s). The editorial staff of GovExec was not involved in its preparation. 
