BABL AI: Conducting third-party audits for automated employment decision tools

A case study on auditing AI systems used to make employment decisions in the human resources (HR) sector, with a primary focus on hiring and promotion decisions.


The Current State of AI Governance

Our interdisciplinary team at the Algorithmic Bias Lab has produced one of the first comprehensive reports on the current state of organizational AI governance. The report, funded in part by the Notre Dame-IBM Technology Ethics Lab, is the result of a yearlong study that used surveys, interviews, and a literature review to examine the internal governance landscape. We asked: which governance tools are being used across sectors, are they working, and if so, why?

Our analysis found that significantly less than half of all organizations that use or develop AI have any formal or substantial AI governance structures. Among those that do, a variety of governance tools are in use, adopted for a variety of reasons. These organizations are, in almost all cases, past the stage of building AI governance frameworks but have not yet developed metrics to assess their effectiveness; on average, they are at the beginning of the implementation stage.

Among organizations that do have governance structures, some key trends are emerging around implementation strategies and challenges. These include the need for repositories and inventories, the importance of risk assessments, difficulty finding employees with the right skills, a lack of external stakeholder engagement, the importance of organizational culture for the uptake of AI governance initiatives, and a lack of clear metrics, among others.

This report is the first in an ongoing project to track and measure the effectiveness of AI governance tools across industries. We hope the results of our analysis can help guide decision makers, many of whom are standing up nascent AI governance structures.

Citation: Davidovic, Jovana, Shea Brown, Ali Hasan, Khoa Lam, Ben Lange, and Mitt Regan. The Current State of AI Governance. Iowa City, IA: BABL AI, 2023. https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf

Algorithm Auditing Framework

What is an “algorithm audit”? Is it the same for every algorithm in every context? What metrics are important, and how do you connect these metrics to the interests of real people?

Three years ago, when Jovana Davidovic, Ali Hasan, and Shea Brown were confronted with the prospect of conducting one of these “audits,” the answers to these questions were not exactly clear. There were plenty of principles, ideas, and proposals, but (at the time, at least) no conceptual framework that could be translated directly into practice without being effectively ad hoc. So we came up with one, and we have been quietly stress-testing it ever since.

Three years later, these questions are still open and more relevant than ever. Here are our initial (and very incomplete) thoughts on the matter, published in Big Data & Society.

The short answers are: 1) there’s still a lot of work to be done; 2) put people first; 3) power matters, and risk is always higher in vulnerable communities; 4) context matters, a lot; 5) algorithmic bias is super important, but it’s not the only way to harm people with algorithms.
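
To make one of these metrics concrete: in bias audits of automated employment decision tools, a common disparity measure is the impact ratio, each group's selection rate divided by the highest group's selection rate. The sketch below is a minimal illustration of that calculation, not BABL's audit methodology; the data, group labels, and the four-fifths threshold used as a rough screen are all hypothetical.

```python
from collections import defaultdict

def impact_ratios(records):
    """Compute each group's selection rate and its ratio to the highest
    group's rate (the impact-ratio measure often reported in employment
    bias audits)."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, selected_by_tool)
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]

for group, ratio in impact_ratios(records).items():
    # The 4/5 (0.8) threshold is only a rough screen, not a verdict
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

A number like this is only a starting point: as the short answers above suggest, context, power, and the interests of the people affected determine whether a given disparity actually constitutes a harm.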