
AI Bias Audit Law

New York City’s Bias Audit Law for Automated Employment Decision Tools (AEDTs)

As HR and recruitment technology has started to incorporate AI and machine learning, various stakeholders are demanding assurance that vendors and employers have done their due diligence to mitigate potential bias and other ethical, privacy, and compliance risks associated with the use of their proprietary AI.

Starting January 2023, employers and employment agencies are required by New York City’s Local Law No. 144 of 2021 to have available “independent bias audits” for automated tools used to substantially assist or replace decisions in hiring or promotion – i.e., automated employment decision tools (AEDTs). Read our one-pager for 5 key points about the “bias audit,” or watch our free webinar to learn more about the details of the law.

BABL AI’s Process Audit is a streamlined and targeted solution for our clients that satisfies the requirement of a bias audit, defined as “an impartial evaluation by an independent auditor.” Developed from our experience in performing direct testing for algorithmic bias, and emerging best practices and standards in responsible AI, this set of transparent, binary audit criteria allows employers and vendors using AEDTs to conduct their own bias testing that trained and certified BABL AI independent auditors can verify.

In addition to NYC Bias Law Audits, we are working closely with our clients and partners to provide similar cost-effective and efficient solutions for upcoming regulations, including the Proposed Amendments to California’s Employment Legislation Regarding Automated-Decision Systems and the EU Artificial Intelligence Act.

Advantages of the Process Audit

Integrity

We hold ourselves to the highest standards of integrity. Our auditors are ForHumanity Certified Auditors for the NYC AEDT Bias Audit. In addition, our auditors follow ForHumanity’s Code of Ethics, PCAOB AS 1105 for audit evidence, and ISAE 3000 for assurance engagements (where applicable).

Transparency

Our audit criteria are publicly available and published as part of the required public summary of audit results, showing openly what we test for and why.

Efficiency

The process audit does not require integration into your technical workflow. Our method simply asks that your organization keep detailed and verifiable documentation of the development, testing, and/or use of your algorithms so that it may be verified and evaluated by our auditors.
Pre-audit
Scoping phase for the audit
BABL AI walks the prospective auditee through a series of questions to determine whether sufficient testing of tools has been conducted to qualify for an audit. In cases where more preparation is required, we provide guidance for the prospective auditee. Once these criteria are met, the auditee is onboarded onto a client portal for progression to the audit phase.
Audit
Core phase of the process audit
The auditee submits documentary evidence to the client portal, which our trained and certified auditors review against the audit criteria. During the review process, BABL AI auditors might ask for more supporting documentation or interact with the auditee’s internal and external stakeholders, such as employees or other third parties, to verify the statements made in the submitted documentation.

At the end of the audit phase, the auditors reach an overall audit opinion that determines the result of the audit: “Pass,” “Minor Remediations,” or “Fail.”
Post-audit
Public Summary Drafting
In this final phase, BABL AI drafts a public report for each AEDT, if mandated by the regulatory body, and presents the final deliverable, including the audit opinion, to the auditee.

FAQ

BABL AI serves as an independent, third-party auditor certifying that the auditee has performed sufficient testing on its AEDT to “assess the tool’s disparate impact on persons of any component 1 category,” which is the minimal requirement for a bias audit under NYC’s Local Law 144 of 2021.
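As an illustration of the disparate-impact calculation at the core of Local Law 144, the law’s implementing rules center on the “impact ratio”: each category’s selection rate divided by the selection rate of the most-selected category. The sketch below uses hypothetical category names and counts; real audits follow the categories and calculation rules published by NYC’s Department of Consumer and Worker Protection.

```python
# Impact ratio sketch for an AEDT bias audit (all counts hypothetical).

def selection_rates(selected, total):
    """Selection rate per category: selected / total applicants."""
    return {cat: selected[cat] / total[cat] for cat in total}

def impact_ratios(selected, total):
    """Each category's selection rate relative to the highest rate."""
    rates = selection_rates(selected, total)
    highest = max(rates.values())
    return {cat: rate / highest for cat, rate in rates.items()}

selected = {"group_a": 40, "group_b": 25}    # hypothetical hires per group
total    = {"group_a": 100, "group_b": 100}  # hypothetical applicants per group

ratios = impact_ratios(selected, total)
# group_a: 0.40 / 0.40 = 1.0; group_b: 0.25 / 0.40 = 0.625
```

An impact ratio well below 1.0 for a category signals that the tool’s selections warrant closer scrutiny.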

We have developed our audit based on best practices and normative guidance from the financial audit/assurance community, as well as from emerging standards in the algorithm (AI) auditing ecosystem. 

We disclose the information required by law, as well as additional context necessary to understand the audit opinion and summary of results. We also disclose all of our audit criteria and the details of our methodology to provide maximum transparency without divulging important intellectual property of the audited organization.

Algorithm Risk & Impact Assessment

Our Risk, Impact and Bias Assessments are a suite of in-depth assessments designed to identify and evaluate various relevant risks of your algorithm as a socio-technical system.
This advisory service is typically provided in a dual-pronged approach: (1) a risk or impact assessment complemented by (2) a bias assessment. Read our peer-reviewed article to learn more about our methodology.

Our risk or impact assessment identifies ethical, compliance, safety, liability, and reputational risks. The output of this workstream is directly fed into the bias assessment to identify what it should be testing for and how best to go about it. A risk or impact assessment typically involves:
1. Internal or external stakeholder interviews,
2. Manual examination of your algorithm’s UI/UX, and
3. Systems review through internal documentation and policies.

As direct testing of your algorithm, our bias assessment aims to quantify the extent of its potential bias. In some cases, the output of this workstream is fed back into the risk assessment for evaluation of bias risks.
1. If you have access to appropriate testing data, our technical team can work with your internal team to design and perform the testing in-house, or
2. If no testing has been done, we may gather testing data to make our bias assessments as deemed appropriate through our risk assessment.
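To give a flavor of what such direct testing can involve, one common statistical check compares selection rates between two groups with a two-proportion z-test. This is a generic sketch with made-up counts and a conventional significance threshold, not a description of BABL AI’s actual test suite.

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z-statistic for the difference between two groups' selection rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 40/100 of group A selected vs. 25/100 of group B.
z = two_proportion_z(40, 100, 25, 100)
# |z| > 1.96 would indicate a statistically significant difference in
# selection rates at roughly the 5% level.
```

In practice a full bias assessment would look beyond a single significance test, but a check like this is a useful first pass on whether observed rate gaps exceed sampling noise.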

Separate reports that are suitable for sharing with your internal or external stakeholders are issued for each assessment, outlining:
1. Our methodologies,
2. Identified and assessed potential risks or impacts,
3. Technical testing results, and
4. Bespoke recommendations for risk and bias mitigation.

Setting the Standard

BABL has been at the forefront of developing standards and best practices in the field of Responsible AI, including developing an Ethical Algorithm Assessment framework, partnering with the non-profit ForHumanity to define audit standards for AI governance, and advising the DoD on AI and national security.

Our products and services sit at the current frontier of Responsible AI & Ethics Consulting.

Responsible AI Governance Gap Analysis

We assess your organization’s current Responsible AI practices and governance against current and emerging standards, regulations, and best practices. In addition, we provide detailed recommendations for what your organization should prioritize to build a successful responsible AI governance program, in areas such as:

1. Principles & Values
2. Policies & Procedures
3. Education & Training
4. Monitoring
5. Risk Mitigation
6. Internal & External Oversight
7. Stakeholder Engagement
8. Internal Reporting

Corporate Training in Responsible AI

This involves developing and delivering training and compliance courses in Responsible AI to fit corporate upskilling needs.

Example areas may include conducting technical bias audits or ethical risk analyses based on proprietary BABL methodologies.

Want to learn more?

Let’s discuss how we can help your organization ensure that your AI and machine learning algorithms are fair and AI governance processes are robust.