The U.S. Commission on Civil Rights has released its comprehensive report, “The Civil Rights Implications of the Federal Use of Facial Recognition Technology (FRT).” The report examines how federal agencies use FRT and raises concerns about accuracy, bias, and the lack of oversight protecting civil rights.
Oversight Gaps in Federal Use
Facial Recognition Technology uses biometric data to identify individuals, and its adoption has grown rapidly across both public and private sectors. Yet the Commission found that federal regulation has not kept pace with its expansion. The report focuses on how three agencies—the Department of Justice (DOJ), Department of Homeland Security (DHS), and Department of Housing and Urban Development (HUD)—use FRT and how their practices intersect with civil rights protections.
The Commission noted that the DOJ, through the FBI and U.S. Marshals Service, relies on FRT to generate leads in criminal investigations. DHS uses it for national security and border management, while HUD employs it in public housing surveillance systems. This last use raises particular concerns, as public housing residents are disproportionately women and people of color, heightening the risk of discriminatory outcomes.
Risks of Bias and Misidentification
One of the most significant findings is the absence of clear federal laws regulating FRT. Without a legal framework, oversight is minimal, even though studies show the technology often misidentifies people of color, women, and older adults. These errors can lead to serious consequences, including wrongful arrests, unwarranted surveillance, and civil rights violations. Commission Chair Rochelle Garza emphasized the urgency of reform: “Unregulated use of facial recognition technology poses significant risks to civil rights, especially for marginalized groups who have historically borne the brunt of discriminatory practices.”
Lack of Transparency and Accountability
The report also found a lack of transparency surrounding how agencies deploy FRT. While the DOJ has interim policies—such as prohibiting arrests based solely on FRT—there is little public data on how often it is used, for what crimes, or how accurate the systems are. Commissioner Mondaire Jones described the report as a crucial step toward accountability. “This report addresses concerns about accuracy, oversight, transparency, discrimination, and access to justice,” he said.
Recommendations for Federal Action
The Commission urged Congress to direct the National Institute of Standards and Technology (NIST) to develop real-world testing protocols for FRT. These tests would measure system accuracy across demographics and help correct disparities. The report also calls for FRT vendors to provide ongoing training and system updates to prevent bias. Beyond technical fixes, the Commission recommends that federal agencies establish Chief AI Officers to oversee responsible AI use and conduct continuous audits. It also supports new legal pathways allowing individuals to seek redress if harmed by FRT misuse.
Protecting Civil Rights in Public Housing
The Commission warned that misidentification risks extend beyond law enforcement. In public housing, where FRT may be used for surveillance, errors could result in discriminatory evictions or barriers to access—potentially violating Title VI of the Civil Rights Act of 1964. These outcomes underscore the need for federal oversight to prevent civil rights violations in AI deployment. The report concludes that robust regulation, transparency, and accountability are essential to ensuring FRT supports safety and justice rather than undermining them.
Need Help?
If you have questions or concerns about any global AI reports, guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their audit experts can offer valuable insight and ensure you stay informed and compliant.