Algorithm Auditing Framework

Written by Shea Brown

Posted on 02/01/2021
In Research

What is an “algorithm audit”? Is it the same for every algorithm in every context? What metrics are important, and how do you connect these metrics to the interests of real people?

Three years ago, when Jovana Davidovic, Ali Hasan, and I were first confronted with the prospect of conducting one of these “audits,” the answers to these questions were not at all clear. There were plenty of principles, ideas, and proposals, but (at least at the time) no conceptual framework that could be translated directly into practice without being effectively ad hoc. So we developed one, and we have been quietly stress-testing it ever since.

Three years later, these questions remain open and are more relevant than ever. Our initial (and very incomplete) thoughts on the matter have now been published in Big Data & Society.

The short answers are:

1) There’s still a lot of work to be done.
2) Put people first.
3) Power matters, and risk is always higher for vulnerable communities.
4) Context matters, a lot.
5) Algorithmic bias is super important, but it’s not the only way algorithms can harm people.
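To make that last point concrete, here is a minimal, purely illustrative Python sketch of one metric an audit might compute: the demographic parity gap, the difference in positive-outcome rates between two groups. The data, group labels, and function name are hypothetical, not from the paper, and a small gap on this one metric would not by itself rule out other harms.

```python
# Purely illustrative sketch of one bias metric an audit might compute.
# The data and group labels below are hypothetical, not from the paper.
from typing import Sequence

def demographic_parity_gap(predictions: Sequence[int],
                           groups: Sequence[str],
                           group_a: str,
                           group_b: str) -> float:
    """Difference in positive-prediction rates between two groups."""
    def rate(group: str) -> float:
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0
    return rate(group_a) - rate(group_b)

# Hypothetical loan-approval predictions (1 = approved) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups, "A", "B"))  # 0.75 - 0.25 = 0.5
```

A gap of zero on this metric says nothing about, say, opaque decision-making, privacy harms, or downstream misuse, which is exactly why an audit framework has to look beyond bias alone.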


Read The Paper
