Machine Morality: Unwanted Bias in AI, Explained

Presented by Esri

Today’s government agencies are expected to deliver quality experiences and services to their constituents. Increasingly, that means implementing AI and automated tools, from chatbots and virtual assistants to enhanced mapping and monitoring capabilities. These innovations help agencies do more with less and, more importantly, give citizens and staff services where and when they need them.

But that potential comes with risks and challenges. Incomplete data sets and human error during the training process can produce biased algorithms.

If we’re not careful, AI can end up doing more harm than good. So, how can government agencies prevent these biases while continuing to innovate? 

Introducing Machine Morality, a new podcast from Esri and GovExec’s Studio 2G, where we get to the bottom of some of government’s biggest ethical AI challenges. In this pilot episode, experts on AI and ethics discuss how defense and intelligence leaders can strategically implement the latest AI tools and technologies, while ensuring the technology is used in a way that serves all populations fairly and equally. 

Listen to their conversation by clicking on the full podcast episode below. And be sure to download and subscribe on Apple Podcasts, Spotify or SoundCloud to take Machine Morality with you on your favorite device. 

This content was produced by GovExec’s Studio 2G and made possible by our sponsor(s). The editorial staff of GovExec was not involved in its preparation.