U.S. AI Advisory Committee, With NIST Support, Endorses New Guidelines for Law Enforcement AI Testing

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/13/2024
In News

The U.S. National Artificial Intelligence Advisory Committee (NAIAC), supported by the National Institute of Standards and Technology (NIST), has voted to endorse a 24-page document from its Law Enforcement Subcommittee that lays out key recommendations for the field testing of AI tools in law enforcement. The vote follows the subcommittee's approval of three pivotal recommendations earlier this summer, which are intended to standardize how AI tools are tested in the field, bringing more transparency, accountability, and consistency to the adoption of AI technology by federal law enforcement agencies.


The recommendations are designed to address the growing use of AI technologies in law enforcement, amid concerns about their effectiveness, fairness, and potential biases. As AI becomes more prevalent in law enforcement, its application has raised questions about accountability and transparency, especially in terms of its field testing and eventual deployment. 


The first recommendation from the subcommittee is for the Office of Management and Budget (OMB) to push federal law enforcement agencies to follow a standardized checklist when testing AI tools in the field. The checklist is intended to provide a clear framework for documenting and assessing these technologies, ensuring that they meet rigorous standards before implementation.


The proposed checklist outlines several essential steps for agencies to follow. These include clearly describing the AI tool and its intended purpose, documenting any use limitations, conducting a thorough AI impact assessment, and identifying the testing methods to be used. Moreover, the checklist suggests using a detailed questionnaire to brainstorm and identify relevant metrics that will be used to evaluate the effectiveness and safety of the AI system in question.
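
To make the checklist concrete, here is a minimal sketch, in Python, of one way an agency might capture these steps as a structured record. The field names and validation logic are illustrative assumptions drawn from the steps described above, not a format specified in the subcommittee's document.

    # Hypothetical sketch of the field-testing checklist as a structured record.
    # Field names are illustrative assumptions, not an official NAIAC schema.
    from dataclasses import dataclass

    @dataclass
    class FieldTestChecklist:
        tool_description: str          # what the AI tool is
        intended_purpose: str          # what it will be used for
        use_limitations: list[str]     # documented limits on how it may be used
        impact_assessment_done: bool   # has an AI impact assessment been conducted?
        testing_methods: list[str]     # how the tool will be evaluated in the field
        evaluation_metrics: list[str]  # metrics identified via the questionnaire

        def missing_items(self) -> list[str]:
            """Return the checklist steps that have not yet been completed."""
            missing = []
            if not self.tool_description:
                missing.append("describe the AI tool")
            if not self.intended_purpose:
                missing.append("state the intended purpose")
            if not self.use_limitations:
                missing.append("document use limitations")
            if not self.impact_assessment_done:
                missing.append("conduct an AI impact assessment")
            if not self.testing_methods:
                missing.append("identify testing methods")
            if not self.evaluation_metrics:
                missing.append("identify evaluation metrics")
            return missing

    # Example: a partially completed checklist flags its remaining steps.
    checklist = FieldTestChecklist(
        tool_description="License-plate reader with on-device OCR",
        intended_purpose="Flag vehicles on a stolen-vehicle hotlist",
        use_limitations=["No real-time tracking of individuals"],
        impact_assessment_done=False,
        testing_methods=["90-day pilot at two precincts"],
        evaluation_metrics=[],
    )
    print(checklist.missing_items())
    # ['conduct an AI impact assessment', 'identify evaluation metrics']

A record like this could also double as the documentation an agency publishes under the transparency recommendation discussed below.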


By following this checklist, federal law enforcement agencies can ensure that AI tools are tested thoroughly and transparently, minimizing the potential for bias or misuse while maximizing the benefits of the technology. The committee emphasized the need for these AI systems to be both fair and effective, especially in high-stakes environments like law enforcement.


The second recommendation from the Law Enforcement Subcommittee is a call for OMB to revise its guidance and require federal law enforcement agencies to make their AI field testing plans and results public. This would apply regardless of whether the AI tool is ultimately adopted by the agency.


The push for greater transparency is rooted in the belief that public scrutiny is essential for holding agencies accountable as they adopt new technologies. By making field testing plans and results publicly available, the subcommittee believes, law enforcement agencies can foster greater trust with the public. Transparency will also enable external evaluation, giving the public access to data on how AI systems are tested, their potential impact, and any concerns or challenges that arise during testing.


The subcommittee’s proposal emphasizes that public disclosure will help prevent potential misuse of AI tools while also allowing the public to see how law enforcement is evolving with emerging technologies. Ultimately, the goal is to provide a clearer picture of how AI can help law enforcement operate more effectively, while also safeguarding civil liberties.


In addition to its recommendations for federal law enforcement, the Law Enforcement Subcommittee issued a third recommendation: providing funding and research support so that state and local law enforcement agencies can conduct their own AI field testing. Many state and local departments are interested in using AI but lack the resources to test these technologies thoroughly before adopting them.


This recommendation seeks to level the playing field, enabling smaller law enforcement agencies to benefit from the same rigorous testing standards as their federal counterparts. The subcommittee proposes increased federal funding to support these efforts, allowing state and local agencies to participate in cutting-edge research and testing that could improve law enforcement practices across the country.


By extending these resources to smaller agencies, the committee hopes to create a more equitable landscape where all law enforcement agencies—regardless of size—have access to AI tools that have been properly vetted and tested.

Need Help?


If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.


Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.