BABL AI Testifies on NYC Bias Law

The City of New York passed Local Law 144, which requires yearly “bias audits” of automated employment decision tools (AEDTs). The NYC Department of Consumer and Worker Protection held a public hearing today to accept testimony on the law prior to enacting new rules for penalties associated with violations.

BABL AI CEO Shea Brown offered written and oral testimony during this hearing to try to clarify some of the ambiguities in the law that make compliance challenging for both employers and vendors. Look for updates as we engage further with the Department on these issues.

Algorithm Auditing Framework

What is an “algorithm audit”? Is it the same for every algorithm in every context? What metrics are important, and how do you connect these metrics to the interests of real people?

Three years ago, when Jovana Davidovic, Ali Hasan, and I were confronted with the prospect of conducting one of these “audits,” the answers to these questions were not exactly clear. There were plenty of principles, ideas, and proposals, but (at the time, at least) no conceptual framework that we could translate directly into practice without it being effectively ad hoc. So we came up with one, and we have been quietly stress-testing it ever since.

Three years later, these questions remain open and are more relevant than ever. Here are our initial (and very incomplete) thoughts on the matter, published in Big Data & Society.

The short answers are: 1) there’s still a lot of work to be done; 2) put people first; 3) power matters, and risk is always higher for vulnerable communities; 4) context matters, a lot; and 5) algorithmic bias is super important, but it’s not the only way algorithms can harm people.