Rep. Clay Higgins, R-La., shown here at a June 2023 House committee hearing, is co-sponsoring a bill that requires agencies to notify individuals using online government services when they are interacting with artificial intelligence tools. Anna Moneymaker/Getty Images

House lawmakers join push for agencies to disclose public-impacting AI uses

A group of House members from both parties rolled out companion legislation to a Senate bill that would require agencies to notify individuals when they are engaging with AI tools.

Four House lawmakers are joining the push for legislation that would require federal agencies to notify individuals when the government uses artificial intelligence technologies to interact with them or make automated decisions that affect their well-being — a companion proposal to Senate legislation that a top lawmaker hopes will become law by the end of the year.

The Transparent Automated Governance Act — introduced on Dec. 22 by Rep. Clay Higgins, R-La., and co-sponsored by Reps. Randy Weber, R-Texas, Jim McGovern, D-Mass., and Don Davis, D-N.C. — would task the director of the Office of Management and Budget with issuing guidance to federal agencies on how to best alert the general public when government entities are using “automated and augmented systems” to interact with them or to “make critical decisions.”

The bill would also establish an appeals process for individuals to challenge automated decisions and address situations “in which an individual is harmed as a direct result of the use of an automated system in the augmented critical decision process.”

The proposal defines “critical decisions” as an agency determination that, in part, “meaningfully affects access to, or the cost, terms or availability of” an individual’s education, employment, transportation, healthcare or asylum status.

“I don’t trust the government, and I don’t trust AI,” Higgins told Nextgov/FCW in a statement. “So I certainly don’t trust government bureaucracy armed with AI as an investigative or enforcement weapon.”

Similar bipartisan legislation was introduced in the Senate last June by Sens. Gary Peters, D-Mich., Mike Braun, R-Ind., and James Lankford, R-Okla. That bill passed the Senate Homeland Security and Governmental Affairs Committee — which Peters chairs — later that same month. 

“The authoritarian use of AI to control and restrict carries major implications and this legislation, as well as the companion piece in the Senate, holds the federal government accountable,” Higgins said, adding that “moving forward, we must focus on the balance between the power of AI and the potential dangers of using AI.”

The House version sponsored by Higgins mirrors the amended bill that passed Peters’ committee; the amendments added a definition of AI to the legislative text and downgraded OMB’s engagement with other agencies from a required consultation to a suggested one.

The bill recommends — but does not mandate — that OMB’s “transparent automated governance guidance” be drafted in consultation with the Government Accountability Office, the General Services Administration and other public and private sector entities.

A Senate committee report that favorably recommended the amended bill be passed into law noted that several agencies — including the Social Security Administration, the Federal Communications Commission and the Consumer Financial Protection Bureau — are already developing or increasingly using AI and autonomous tools for public-facing purposes, but that these early uses of the technology have not been without challenges.

The report noted that Customs and Border Protection’s mobile application for migrants to apply for asylum at the U.S.-Mexico border — which uses facial biometrics — “failed to register many people with darker skin tones, effectively barring them from their right to request entry.” In another instance, lawmakers noted that “algorithms deployed across at least a dozen states to decide who is eligible for Medicaid benefits erroneously stripped critical assistance from thousands of Americans who relied on disability benefits.”

In a statement to Nextgov/FCW, Peters said “as the federal government uses artificial intelligence more and more to help provide critical services to the public, there must be transparency so that people know when they are interacting with AI and when AI is making decisions that will impact them.”

A spokesperson for Peters said the Senate bill will not require floor time and the hope is it will pass by unanimous consent or as part of a larger legislative package before the end of the 118th Congress. They added that Peters’ office has been in touch with Higgins’ office about the House companion bill.

“I appreciate my House colleagues for joining my effort to move this bipartisan bill to make it easier for Americans to understand how AI is being used by the government, and guarantee that a human being will review AI-generated decisions if needed,” Peters said.