Where agencies are turning to automation to augment human work

Tech policy leadership discussed ongoing federal use cases for automated systems, emphasizing responsible implementation and agency-specific training. 

Artificial intelligence stands to create positive change and reduce “noise” across a range of public sector use cases, but federal leaders are tempering these advances with responsible implementation as the technology continues to outpace formal legislation.

Speaking during a FedInsider digital discussion, tech policy leadership at the Government Accountability Office and the State and Defense departments talked about some of the ways their offices are using — and policing — AI systems.

“We're all impacted by AI, whether we're in the commercial sector or in the federal government,” Taka Ariga, the inaugural chief data scientist at GAO and director of the agency’s Innovation Lab, said Wednesday. He said AI and machine learning technologies are particularly helpful for pattern-heavy tasks.

“AI is really good about identifying computer vision patterns and trends that usually humans are not really good at,” he said. At GAO, a key task is using large language models to support the office’s existing audit methodology, which requires training the algorithms correctly.

“As we approach data literacy and digital literacy type of training, we are very specific in terms of how these types of AI or machine learning capabilities can be used in the right audit context,” he said. “We take a lot of pre-trained, large language models, and then we further train them on GAO-specific language.”
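
That last step, continuing to train an off-the-shelf model on an agency’s own documents, is a standard fine-tuning pattern. The sketch below shows roughly what it can look like with open-source tooling; the base model, file name, and hyperparameters are illustrative assumptions, not GAO’s actual pipeline.

```python
# Minimal sketch: continue training a pre-trained language model on
# domain-specific text. Model name, data file, and hyperparameters are
# illustrative placeholders only.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # stand-in for whatever pre-trained model an agency uses
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Plain-text corpus of agency-specific language (placeholder file name).
corpus = load_dataset("text", data_files={"train": "audit_reports.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-tuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("domain-tuned")
```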

Adopting AI technologies to help expedite tasks is also happening within State and Defense. Giorleny Altamirano Rayo, State’s chief data scientist and responsible AI official, said that the department is using AI to assist in its cable declassification project.

“We're working on several natural language processing projects to automate the review and analysis of these unstructured data,” she said. The work automates parts of the typical process in which foreign policy experts comb through sensitive documents and emails to identify critical information.

Altamirano Rayo said that this automation will ensure a more thorough and accurate review process so State Department foreign policy officials and staff can draw more insights in a shorter time span.

“It applies machine learning to augment what the humans are doing,” she said. “We're really augmenting the work of the reviewer.”
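
One common way to augment a reviewer without taking the decision away from them is to have an NLP model surface the people, places, and dates in each document so the human can jump straight to them. The sketch below illustrates the idea with an open-source entity recognizer; the model and sample text are assumptions for illustration, not the department’s actual system.

```python
# Minimal sketch of augmenting a document reviewer with NLP: extract named
# entities from unstructured text so a human can focus on the details that
# matter. Model and sample text are illustrative assumptions.
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

cable_text = (
    "Embassy staff met with the foreign minister on 12 March 1998 "
    "in Geneva to discuss the draft agreement."
)

doc = nlp(cable_text)

# Surface candidate entities for the reviewer rather than deciding anything
# automatically; the human still makes the declassification call.
for ent in doc.ents:
    print(f"{ent.label_:>8}  {ent.text}")
```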

Underpinning Altamirano Rayo and State’s work is an adherence to current federal guidance on the deployment and use of AI systems. In addition to following President Joe Biden’s executive order on trustworthy AI, State also launched its first Foreign Affairs Manual chapter on AI and is developing more enterprise guidance.

“We are very mindful of the importance of adopting AI responsibly,” Altamirano Rayo said. “So the first thing we did was to ensure that we have a strategy to adopt AI in a way that will inform the practice and the management of diplomacy but in a responsible manner.”

At Defense, AI technologies have been applied to use cases like wearables to support military readiness. Jaime Fitzgibbons, the manager of the AI/ML portfolio at DOD’s Defense Innovation Unit, described one example of preventative screening to safeguard warfighter health.

“We did a prototype actually with a wearable watch and a ring, correlating information, and they were actually able to detect pre-symptomatic infections,” she said. The same approach can also be applied to preventative maintenance checks for aircraft and tanks.
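
The underlying technique in both cases is anomaly detection: learn a baseline from normal readings, then flag readings that drift away from it. A minimal sketch follows, using simulated heart-rate and temperature data rather than DIU’s actual prototype.

```python
# Minimal sketch of the idea behind pre-symptomatic detection from wearables:
# flag days whose physiological readings deviate from a personal baseline.
# Features and data are simulated, illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated baseline: resting heart rate (watch) and skin temperature (ring)
# for 60 healthy days, plus 3 days with elevated readings.
healthy = np.column_stack([rng.normal(60, 3, 60), rng.normal(36.5, 0.2, 60)])
elevated = np.column_stack([rng.normal(72, 3, 3), rng.normal(37.4, 0.2, 3)])

detector = IsolationForest(contamination=0.05, random_state=0).fit(healthy)

# -1 marks readings that look anomalous relative to the baseline; in practice
# these would prompt screening, not a diagnosis.
print(detector.predict(elevated))     # expected: mostly -1
print(detector.predict(healthy[:5]))  # expected: mostly 1
```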

Altamirano Rayo championed the ability to pull signal out of noise as well.

“We really need to sift through the noise and pinpoint the signal, and we just cannot intake so many documents,” she said. “What we have to do is make sure that we can make confident predictions and make sure that humans only see those documents that really need to have higher level thinking.”
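
That routing logic, acting automatically only when a model is confident and otherwise sending the document to a person, can be expressed very simply. The sketch below uses a toy text classifier; the labels, training examples, and threshold are assumptions for illustration, not State’s actual criteria.

```python
# Minimal sketch of confidence-based triage: a classifier scores documents,
# high-confidence predictions are handled automatically, and low-confidence
# ones are queued for a human reviewer. All data here is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled sample standing in for previously reviewed documents.
train_docs = [
    "routine administrative travel memo",
    "schedule for public press briefing",
    "sensitive source reporting on negotiations",
    "classified assessment of security posture",
]
train_labels = [0, 0, 1, 1]  # 0 = releasable, 1 = needs restriction

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_docs, train_labels)

CONFIDENCE_THRESHOLD = 0.85  # assumption; tuned against review data in practice

def triage(doc: str) -> str:
    """Return an automatic decision when confident, otherwise route to a human."""
    probs = clf.predict_proba([doc])[0]
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return f"auto: label={probs.argmax()} (p={probs.max():.2f})"
    return "human review"

for doc in ["weekly cafeteria menu", "reporting on negotiations posture"]:
    print(doc, "->", triage(doc))
```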