
What will the federal government do with generative AI?

Federal employees are going to see AI tools show up in cloud-based productivity suites sooner rather than later, but it's not clear yet how the trending tech will impact public-facing digital services.

Federal activity in the generative AI space so far has been limited. While federal agencies have fielded more than a thousand AI use cases, they aren't yet widely leveraging the content-creation powers of ChatGPT, Google's Bard and other large language models.

Part of the reason could be a lack of direction. The White House announced in May that the Office of Management and Budget plans to unveil policy guidance on the use of AI in the federal government sometime this summer. As of this writing, agencies have the AI Bill of Rights framework and an AI Risk Management Framework from the National Institute of Standards and Technology.

“The reason we aren’t looking at, ‘Hey, are agencies meeting the requirements of that law or not, or that guidance?’ is because there are no specific requirements. It’s all aspirational,” said Kevin Walsh, a director on the Government Accountability Office's Information Technology and Cybersecurity team.

But AI tools are poised to enter the federal workstream regardless.

Right now, Microsoft is bundling AI capabilities into its Bing search engine and Edge browser. Microsoft also recently rolled out Azure OpenAI Service to government customers, which allows users to tap existing large language models to develop AI-based applications and services. Google allows Workspace administrators to add Bard functionality to its productivity suite. And agencies are looking at how to use generative AI features both internally and in public-facing digital services.

AI enters the chat

Large language models trained on text about a government program have the potential to make chatbots more helpful, said Dave Guarino, an independent technology consultant who previously worked at the civic tech nonprofit Code for America and the U.S. Labor Department.

Where current chatbots fall short, the tech could match an incoming question to the most similar answer in a vast set of vetted question-and-answer pairs.

“How many government agencies have very large FAQs on their website, right?” Guarino said. “You can imagine these things being used not to actually generate new text in response to a person, but to take the question and say, ‘Of these 500 question-and-answer pairs that we know exist, which looks most similar to that question?’... What you’re using the large language model for in this case is just figuring out which is most relevant for them.”
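The article doesn't describe any agency's actual implementation, but the matching idea Guarino outlines can be sketched in a few lines. In this toy version, a bag-of-words cosine similarity stands in for the real LLM embedding vectors a production system would use (e.g., from a sentence-embedding model); the `faq` data, `embed` and `best_match` names are all illustrative, not from the article.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in: a real system would call an LLM embedding model
    # to turn the text into a dense semantic vector.
    return Counter(word.lower().strip("?.,!") for word in text.split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[term] * b[term] for term in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(question: str, faq: dict) -> tuple:
    # Return the vetted question-and-answer pair most similar
    # to the user's question -- no new text is generated.
    q_vec = embed(question)
    best_q = max(faq, key=lambda known: cosine(q_vec, embed(known)))
    return best_q, faq[best_q]

faq = {
    "How do I check the status of my claim?":
        "Log in to the portal and select 'Claim Status'.",
    "What documents do I need to apply?":
        "A photo ID and proof of income.",
}
matched_q, answer = best_match("where can i see my claim status", faq)
```

As Guarino notes, the model here only decides which existing, human-vetted answer is most relevant; it never writes a response itself, which sidesteps the accuracy risks discussed later in the piece.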

The tech could also go even further. Generative AI offers the potential to go from reading a script to “generating the script in real time” to create “a more natural back-and-forth flow,” said Santiago Milian, customer experience principal at Booz Allen Hamilton.


Some agencies are already tapping into the potential of AI. The Office of Personnel Management is using AI functionality in a pilot offering to direct retirees and annuitants to agency resources and answer very basic questions. 

Crunching data

Generative AI can help solve persistent data interoperability problems, quickly recognize common data elements and perform deduplication tasks that are outside the scope of previous technologies.

Digital services company Ad Hoc has already worked with one agency, connecting via an API to Google’s BERT model to process mandatory grant reporting. Ad Hoc didn’t name the agency because it hasn’t received approval to talk publicly about the work.

BERT is an open-source large language model, but unlike generative AI tools such as ChatGPT, it is designed to “derive meaning from large amounts of text” rather than “generate new content,” explained Mark Headd, a government technology expert at Ad Hoc.

The tech helped go through unstructured data to de-dupe 30% of database entries, something that “simple methods … like exact string matching” weren't able to do, an Ad Hoc blog post says.
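Ad Hoc hasn't published the details of that pipeline, but the general approach of catching near-duplicates that exact string matching misses can be sketched as follows. Here a simple token-overlap (Jaccard) score stands in for comparing BERT embedding vectors; the `dedupe` function, its threshold, and the sample records are all hypothetical.

```python
def jaccard(a: str, b: str) -> float:
    # Toy stand-in for semantic similarity: a real pipeline would
    # compare BERT embeddings (e.g., via cosine similarity) instead.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def dedupe(records: list, threshold: float = 0.6) -> list:
    # Keep a record only if it isn't a near-duplicate of one
    # already kept; exact string matching would miss these.
    kept = []
    for rec in records:
        if all(jaccard(rec, k) < threshold for k in kept):
            kept.append(rec)
    return kept

entries = [
    "Grant 2021-A, Springfield Community Health Center",
    "Grant 2021-A Springfield Community Health Center",  # near-duplicate
    "Grant 2021-B, Riverdale Food Assistance Program",
]
unique = dedupe(entries)
```

The second entry differs from the first only by punctuation, so `==` comparison treats them as distinct while a similarity threshold catches the overlap; that is the gap the article says semantic models closed.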

Straight talk

Generative AI could help the government rewrite jargon in a more understandable way. 

“There just have never been enough resources, just not enough hours in the day, not enough people to do that kind of work,” said Beth Simone Noveck, director of the Burnes Center for Social Change and GovLab at Northeastern University, chief innovation officer for New Jersey and former deputy chief technology officer in the Obama White House. “It's radically game-changing.”

OPM's top tech official recently told an audience at a technology conference that in the future the agency could leverage generative AI tools to help rewrite out-of-date job descriptions.

In the benefits space, the back-and-forth format of generative AI could help staffers find what they need in long sets of guidance, policies and rules, and even go through the logic of how to apply complicated rules, Guarino said.

Risks and opportunities

Suresh Venkatasubramanian, a computer science and data science professor at Brown University, co-authored the White House Blueprint for an AI Bill of Rights while on the staff of the Office of Science and Technology Policy. 

One risk: accuracy. These tools have been known to generate “hallucinations” not based in facts. Venkatasubramanian told Nextgov/FCW that generative AI apps have “no notion of correctness” and are “essentially predictive models to predict … text they’ve been trained with.”

The government will have to contend with questions about how vendors build their systems, what happens to data entered into systems and more, he said.

Steve Bennett, global government practice lead at SAS, noted that another risk is employees blindly embracing the promise of generative AI without understanding its pitfalls. “You do not want government employees who don’t understand the technology enough to be able to say, ‘I don’t believe that result,’” he said.

Even so, Northeastern’s Noveck offered another framing for the risks: the “opportunity cost” of not reaping the potential benefits of generative AI.

“How do we actually more proactively embrace the opportunities, not just fret about the risks?” she asked. “We’re at a point in time in which rates of trust in government are at an all-time low. Worse yet, rates of trust in democracy are declining. We cannot afford to let those rates slip lower by having government be any less effective than it can be.”