Artificial intelligence tools used to identify people in law enforcement investigations, airport security and public housing surveillance are disproportionately harming people of color and women, according to a new report by a government watchdog.
Facial recognition technology, which civil rights groups and some lawmakers have criticized for invading privacy and for its inaccuracies, is increasingly being used by federal agencies with little oversight, a new investigation by the U.S. Commission on Civil Rights has found.
“Unregulated use of facial recognition technology poses serious civil rights risks, especially for vulnerable groups who have historically borne the brunt of discriminatory practices,” said Chair Rochelle Garza. “As we work to develop AI policy, we must rigorously test facial recognition technology for fairness and promptly address any disparities found between demographic groups or halt its use until those disparities are resolved.”
Rapidly evolving facial recognition tools are increasingly being deployed by law enforcement agencies, but there are no federal laws regulating their use.
At least 18 federal agencies use facial recognition technology, according to the Government Accountability Office. In addition to federal deployment, the Department of Justice has awarded $4.2 million since 2007 to local law enforcement agencies across the country for programs that at least partially used facial recognition tools, public records show.
FBI deploys facial recognition software on vast database
The 184-page report, released this month, details how federal agencies have quietly deployed facial recognition technology across the U.S. and how its use could violate civil rights. The commission focused on the Departments of Justice, Homeland Security and Housing and Urban Development.
“While there is vigorous debate about the benefits and risks associated with federal use of FRT, many agencies have already adopted the technology,” the report said, adding that it could have serious consequences, including false arrests, unjust surveillance and discrimination.
Facial recognition systems use biometric software to map a person’s facial features from a photograph. The system then matches the face to a database of images to identify the person. Accuracy depends on several factors, including the quality of the algorithm and the images used. Tests have found that even the most powerful algorithms are more likely to produce false matches for certain groups, including older people, women and people of color, the commission said.
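The matching step described above boils down to comparing a numeric "embedding" of the probe face against embeddings stored in a database and accepting the closest match above a similarity threshold. The sketch below is purely illustrative, not any agency's actual system: the tiny three-number embeddings, the names and the 0.9 threshold are all invented for demonstration, while real systems derive embeddings with hundreds of dimensions from trained models. It does show why the threshold matters: set it too low and unrelated faces become false matches, the failure mode the commission flags.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the identity whose embedding is most similar to the probe,
    but only if that similarity clears the threshold; otherwise None."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy database of enrolled identities (embeddings are made up).
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

print(best_match([0.88, 0.12, 0.31], database))  # close to person_a
print(best_match([0.0, 0.0, 1.0], database))     # no match above threshold
```

If the embedding model itself is less accurate for certain demographic groups, as the tests cited by the commission found, then no choice of threshold fully fixes the problem: lowering it admits more false matches, raising it causes more missed identifications, and the error rates stay uneven across groups.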
The U.S. Marshals Service uses facial recognition tools to investigate fugitives, missing children and major crimes, and for protective security operations, the commission's report said, citing the Department of Justice. The Marshals Service has had a multi-year contract with facial recognition software company Clearview AI. In February 2022, some lawmakers raised concerns about the use of Clearview AI products and other facial recognition systems due to potential civil rights violations, including threats to privacy.
The FBI’s use of facial recognition technology dates back to at least 2011. The Justice Department told committee members that the agency can run its facial recognition software on a wide range of images, including arrest photographs, driver’s licenses, public social media accounts, public websites, cell phones, security camera footage, and photos stored by other law enforcement agencies.
The U.S. Government Accountability Office has been investigating the FBI's use of facial recognition technology since 2016. In its 2016 report, the office concluded that the FBI "should do more to ensure privacy and accuracy."
The Justice Department, which oversees the FBI and U.S. Marshals, issued an interim policy in December 2023 saying facial recognition should only be used for investigative leads, according to the report. The commission added that there was insufficient data on the Justice Department's use of facial recognition to know whether that was being put into practice.
The FBI declined to comment on the report when contacted by USA TODAY. The Department of Justice and the U.S. Marshals Service also did not respond to requests for comment.
AI tools used for border control and immigration investigations
The Department of Homeland Security, which oversees immigration enforcement and airport security, has deployed facial recognition tools across several agencies, according to the commission's investigation.
US Immigration and Customs Enforcement has reportedly been using facial recognition technology in searches since 2008, when it contracted with biometric defense company L-1 Identity Solutions.
The contract allows ICE to access the Rhode Island Department of Transportation's facial recognition database to find undocumented immigrants who have been charged with or convicted of crimes, the commission wrote, citing a 2022 study by Georgetown Law's Center on Privacy & Technology.
Facial recognition technology is also used to verify people's identities at airports, ports and pedestrian lanes at southwest and northern border crossings. The report noted that in 2023, civil rights groups reported that U.S. Customs and Border Protection's mobile app struggled to identify Black asylum seekers who wanted to book an appointment. CBP said this year that its accuracy rate across ethnicities was more than 99 percent, according to the commission's report.
Dana Gallagher, a spokesperson for the Department of Homeland Security, told USA TODAY that the department takes the commission's findings seriously and has been at the forefront of rigorous testing for bias.
The report said the department opened a 24,000-square-foot laboratory for testing biometric systems in 2014. Gallagher said the Maryland testing facility, which the commission visited and documented, served as "a model for testing facial recognition systems in a real-world environment."
“In fulfilling our mission of protecting the security of the homeland and the safety of travelers, the Department of Homeland Security is committed to protecting the privacy, civil rights, and civil liberties of all individuals we interact with,” Gallagher said.
Public housing authorities deploy facial recognition tools
The commission said some public housing complexes' surveillance cameras are equipped with facial recognition technology, leading to evictions for minor infractions – concerns lawmakers have raised since at least 2019.
The report said the U.S. Department of Housing and Urban Development did not develop any of the technology itself, but gave grants to public housing authorities that use it to purchase cameras equipped with the technology, thus “putting FRT in the hands of grantees without regulation or oversight.”
Because public housing tenants are disproportionately women and people of color, the commission warned, use of the technology could violate Title VI of the Civil Rights Act. In April 2023, HUD announced that Emergency Safety and Security Grant funds could not be used to purchase the technology, but the report noted that there were no restrictions on its use by grantees who already own the tool.
The commission cited a May 2023 Washington Post investigation that found security cameras are being used to catch minor infractions, like smoking in the wrong place or taking a cart out of the laundry room, in order to punish residents and get them evicted. Attorneys for evicted tenants have also reported an increase in lawsuits citing security camera footage as evidence for evictions, the Post reported.
The Department of Housing and Urban Development did not respond to USA TODAY's request for comment.
Civil rights groups hope report will spur policy changes
Tierra Bradford, senior program manager for justice reform at the Leadership Conference on Civil and Human Rights, told USA TODAY she was excited to see the report and hopes it will lead to further action.
“I think they highlight a lot of concerns that we in the justice community have had for a long time,” Bradford said.
The U.S. criminal justice system has a history of disproportionately targeting marginalized communities, and facial recognition tools appear to be another iteration of that problem, she added.
“There should be a moratorium on a technology that is so clearly biased and has such a disproportionate impact on communities.”
National debate over facial recognition tools
The commission’s report comes after years of debate over the use of facial recognition tools in the public and private sectors.
The Detroit Police Department announced in June that it would revise its policy on how it uses the technology to solve crimes as part of a federal settlement with a Black man who was wrongfully arrested for theft in 2020 based on facial recognition software.
Last year, the Federal Trade Commission banned Rite Aid from using AI facial recognition technology, saying it was unfairly flagging customers, particularly people of color and women. The FTC said the system generated alerts based on low-quality images, resulting in thousands of false positives and leading to customers being searched or kicked out of stores for crimes they didn't commit.
In Texas, a man who was wrongfully arrested and jailed for nearly two weeks filed a lawsuit in January alleging that facial recognition software misidentified him as a suspect in a store robbery. According to the lawsuit, artificial intelligence software at a Houston Sunglass Hut store used low-quality security footage of the crime to misidentify Harvey Murphy Jr. as a suspect, leading to a warrant for his arrest.
At the national level, members of the Commission on Civil Rights said they hope the report will inform lawmakers on the use of the rapidly evolving technology. The commission is pushing for testing protocols that agencies can use to check the effectiveness, fairness and accuracy of their software. It also recommends that Congress provide "statutory mechanisms for legal relief" for people harmed by FRT.
“We hope this bipartisan report will inform public policy to address the myriad challenges surrounding artificial intelligence (AI) generally, and facial recognition technology in particular,” said Commissioner Stephen Gilchrist. “Our nation has a moral and legal obligation to ensure that the civil rights and civil liberties of all Americans are protected.”