Senators Are Asking Whether Artificial Intelligence Could Violate U.S. Civil Rights Laws

Senators Kamala Harris and Cory Booker both signed letters to federal agencies asking about AI bias. (J. Scott Applewhite/AP)

Senators are pressuring government agencies to study bias in artificial intelligence.

Seven members of Congress have sent letters to the Federal Trade Commission, the Federal Bureau of Investigation, and the Equal Employment Opportunity Commission asking whether the agencies have vetted the potential biases of artificial intelligence algorithms being used for commerce, surveillance, and hiring.

“We are concerned by the mounting evidence that these technologies can perpetuate gender, racial, age, and other biases,” a letter to the FTC says. “As a result, their use may violate civil rights laws and could be unfair and deceptive.”

The letters ask the agencies to respond by the end of September with any complaints they have received about unfair uses of facial recognition or artificial intelligence, as well as details on how these algorithms are tested for fairness before being deployed by the government.

In the letter to the EEOC, Senators Kamala Harris, Patty Murray, and Elizabeth Warren specifically ask the agency to determine whether this technology could violate the Civil Rights Act of 1964, the Equal Pay Act of 1963, or the Americans with Disabilities Act of 1990.

This isn’t the first time the US Congress has shown interest in the potential biases of artificial intelligence. Earlier this year, during a series of hearings on the technology, Charles Isbell, executive associate dean at the Georgia Institute of Technology, testified about the biases he has seen over nearly 30 years of working in AI research.

“I was breaking all of [my classmates’] facial recognition software because apparently all the pictures they were taking were of people with significantly less melanin than I have,” Isbell said in the February hearing, referring to the pictures used to train AI to detect faces during his Ph.D. program in the 1990s.

Facial recognition researcher Joy Buolamwini told Quartz that the letters were a major step in alerting federal agencies to the dangers of bias in AI.

“Government agencies will need to ramp up their ability to scrutinize AI-enabled systems for harmful bias that may go undetected under the guise of machine neutrality,” she said.

Buolamwini says that, at the very least, the FTC should put facial recognition on the agenda for its hearings on artificial intelligence on November 13 and 14.