BABL AI: A Framework for Assurance Audits of Algorithmic Systems

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 01/29/2024
In Research

BABL AI continues to be a leader in AI system audits, and its team of audit experts recently showcased that expertise in an academic paper. In “A Framework for Assurance Audits of Algorithmic Systems,” the BABL AI research team led by Chief Product Officer Khoa Lam, Senior Advisor Benjamin Lange, and Senior Consultant Borhane Blili-Hamelin presents an AI assurance audit framework called the “criterion audit,” which they model after financial auditing practices. The paper was co-authored and supported by CEO Shea Brown, Chief Ethics Officer Jovana Davidovic, and Senior Advisor Ali Hasan. This work forms the basis of BABL AI’s audit methodology for the NYC AI Bias Audit Law.


In the paper, the BABL AI team notes the increasing regulatory focus on AI systems and emphasizes the need for transparent and accountable practices. Current regulations, however, lack agreed-upon standards for compliance and assurance audits. To address this gap, the authors propose the criterion audit framework, which aims to assure stakeholders of AI organizations’ ability to govern their algorithms responsibly and in line with human values. The paper outlines the conditions and procedures for conducting criterion audits and illustrates their application to bias audits of hiring algorithms, as mandated by the NYC AI Bias Audit Law.

You can read the full article HERE.
