UPDATE — AUGUST 2025: The U.S. is beginning to implement recommendations from the National Artificial Intelligence Advisory Committee's Law Enforcement Subcommittee. In early 2025, the Office of Management and Budget integrated new AI oversight rules into federal agency guidance, and the Department of Justice launched pilot testing programs for forensic AI, predictive analytics, and investigative tools. Meanwhile, Congress has funded a new grant program that helps state and local police departments conduct their own AI field tests, addressing resource gaps. Transparency remains a sticking point: DHS has posted some testing reports, but a universal mandate for public disclosure has yet to be adopted.
ORIGINAL NEWS STORY:
U.S. AI Advisory Committee, With NIST Support, Endorses New Guidelines for Law Enforcement AI Testing
The U.S. National Artificial Intelligence Advisory Committee (NAIAC), supported by the National Institute of Standards and Technology (NIST), has voted to endorse the findings of a 24-page document from its Law Enforcement Subcommittee that lays out key recommendations for field testing AI tools in law enforcement. The vote follows the subcommittee's approval of three pivotal recommendations earlier this summer. Those recommendations are set to standardize how AI tools are tested in the field, bringing more transparency, accountability, and consistency to the adoption of AI technology by federal law enforcement agencies.
The recommendations address the growing use of AI technologies in law enforcement amid concerns about their effectiveness, fairness, and potential biases. As these tools become more prevalent, their use has raised questions about accountability and transparency, particularly around field testing and eventual deployment.
Standardizing How Law Enforcement Tests AI
The first recommendation calls for the Office of Management and Budget (OMB) to require federal law enforcement agencies to follow a standardized testing checklist before deploying AI systems. This checklist would document each AI tool's purpose, limitations, and testing methods, and would include a questionnaire to identify metrics for measuring accuracy, fairness, and performance. By adopting a structured approach, agencies can better evaluate whether AI tools meet safety and ethical standards, reducing bias risks and helping ensure that AI systems serve the public interest while maintaining accountability.
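To make the idea concrete, the sketch below shows one way such a checklist could be captured in machine-readable form. It is purely illustrative: the field names and structure are assumptions made for this example, not taken from the NAIAC document or any OMB guidance.

```python
from dataclasses import dataclass

# Hypothetical sketch of a pre-deployment field-testing checklist.
# All field names are illustrative assumptions; they are not drawn
# from the NAIAC recommendations or any OMB guidance.

@dataclass
class FieldTestChecklist:
    tool_name: str
    intended_purpose: str            # what the AI tool is meant to do
    known_limitations: list[str]     # documented failure modes and constraints
    testing_methods: list[str]       # e.g., lab benchmarks, supervised field trials
    accuracy_metrics: list[str]      # how correctness will be measured
    fairness_metrics: list[str]      # e.g., error-rate parity across demographic groups
    performance_metrics: list[str]   # e.g., latency, throughput, operational fit

    def is_complete(self) -> bool:
        """The checklist is ready for review only when every section is filled in."""
        return all([
            self.tool_name,
            self.intended_purpose,
            self.known_limitations,
            self.testing_methods,
            self.accuracy_metrics,
            self.fairness_metrics,
            self.performance_metrics,
        ])
```

In practice, an agency might complete one such record per tool and publish it alongside test results, which dovetails with the transparency recommendation discussed next.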
Pushing for Greater Transparency
The second recommendation urges OMB to require federal law enforcement agencies to make AI field testing plans and results public—even if the tools are not ultimately adopted. Transparency, the committee argues, is vital for maintaining public trust. Publishing testing data allows independent experts and the public to evaluate how AI systems perform in real-world conditions and whether they align with civil liberties and fairness principles. The subcommittee believes that public disclosure can deter misuse and build confidence in law enforcement’s use of emerging technology. Sharing testing outcomes also helps agencies learn from one another, strengthening AI oversight nationwide.
Expanding Access for State and Local Agencies
The third recommendation focuses on supporting smaller law enforcement agencies that want to test AI tools but lack the resources to do so. The subcommittee proposes dedicated federal funding and research support to help state and local departments conduct their own field testing. These funds would allow smaller jurisdictions to assess AI systems for reliability and fairness before deployment. By extending these opportunities beyond federal agencies, the committee aims to create a more equitable system where every department—large or small—can adopt AI responsibly. This collaborative approach could raise national standards and promote safer, more effective law enforcement practices.
A Framework for Responsible AI in Policing
Together, these recommendations lay the groundwork for standardized, transparent, and equitable AI adoption across the U.S. law enforcement landscape. They reflect growing recognition that testing protocols must evolve alongside technology, ensuring AI systems are both effective and ethical. As AI continues to play a larger role in investigations and public safety, NAIAC’s guidance—with NIST’s technical expertise—marks a pivotal step toward national oversight and consistent accountability.
Need Help?
If you’re navigating new U.S. or international AI regulations, contact BABL AI. Their Audit Experts can guide your organization through emerging compliance requirements and responsible AI implementation.