
Who We Are
We are BABL AI, a boutique consulting and audit firm focused on responsible AI. We believe that algorithms should be developed, deployed, and governed in ways that prioritize human flourishing.
We unlock the value of responsible AI for clients by combining leading research expertise and extensive practitioner experience in AI and organizational ethics to drive impactful change at the frontier of technology and emerging standards. Our team consists of leading experts and practitioners in AI, ethics, law, and machine learning.
What We Do
BABL AI has been at the forefront of developing proven methodologies, standards, and best practices since 2018. Our services are focused on four key areas.

Algorithm Process Audits
We independently and impartially evaluate your organization’s bias testing through our process audit, which is based on clearly defined criteria and certified audit standards.

Algorithm Risk, Impact, and Bias Assessments
We identify and evaluate the risks and impacts of your organization’s algorithms through an in-depth assessment involving our proprietary methodology, stakeholder interviews, and direct technical testing.

Responsible AI Governance Gap Analysis
We assess your current Responsible AI practices and governance in relation to current (and emerging) standards, regulations, and best practices.

Corporate Training in Responsible AI
We develop and deliver training and compliance courses in Responsible AI to fit your corporate upskilling needs.

NYC Bias Audit Law
New York City’s Bias Audit Law for Automated Employment Decision Tools (AEDTs)
As HR and recruitment technology has started to incorporate AI and machine learning, various stakeholders are demanding assurance that vendors and employers have done their due diligence to mitigate potential bias and other ethical, privacy, and compliance risks associated with the use of their proprietary AI.
Starting January 2023, employers and employment agencies are required by New York City’s Local Law No. 144 of 2021 to have available “independent bias audits” for automated tools used to substantially assist or replace decisions in hiring or promotion – i.e., automated employment decision tools (AEDTs). Read our one-pager for 5 key points about the “bias audit,” or watch our free webinar to learn more about the details of the law.
BABL AI’s Process Audit is a streamlined and targeted solution for our clients that satisfies the requirement of a bias audit, defined as “an impartial evaluation by an independent auditor.” Developed from our experience in performing direct testing for algorithmic bias, and emerging best practices and standards in responsible AI, this set of transparent, binary audit criteria allows employers and vendors using AEDTs to conduct their own bias testing that trained and certified BABL AI independent auditors can verify.
In addition to NYC Bias Law Audits, we are working closely with our clients and partners to provide similar cost-effective and efficient solutions for upcoming regulations, including the Proposed Amendments to California’s Employment Legislation Regarding Automated-Decision Systems and the EU Artificial Intelligence Act.
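The bias testing that auditors verify under Local Law 144 centers on impact ratios: each demographic category’s selection rate compared against the highest category’s rate. The sketch below illustrates that arithmetic only; the category names and counts are hypothetical, not real client data or BABL AI’s audit methodology.

```python
# Illustrative sketch of the "impact ratio" arithmetic behind bias
# testing for disparate impact. All names and numbers are hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants in a category who were selected."""
    return selected / total

def impact_ratios(results):
    """Each category's selection rate divided by the highest rate."""
    rates = {cat: selection_rate(s, t) for cat, (s, t) in results.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical screening outcomes: category -> (selected, total applicants)
outcomes = {
    "group_a": (80, 200),  # 40% selection rate
    "group_b": (60, 200),  # 30% selection rate
}

ratios = impact_ratios(outcomes)
# group_a is the highest-rate group (ratio 1.0); group_b's ratio is
# approximately 0.75, i.e. 30% / 40%.
```

A ratio well below 1.0 for some category is what typically prompts closer scrutiny; the law itself defines the required calculations in detail.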
Pre Audit
Scoping phase for the audit
Audit
Core phase of the process audit
The auditee submits documentary evidence to the client portal, which our trained and certified auditors review against the audit criteria. During the review process, BABL AI auditors might ask for more supporting documentation or interact with the auditee’s internal and external stakeholders, such as employees or other third parties, to verify the truth of statements made in the submitted documentation.
At the end of the audit phase, the auditors reach an overall audit opinion that determines the result of the audit. This opinion can be a “Pass,” “Minor Remediations,” or “Fail” result.
Post Audit
Public Summary Drafting
In this final phase, BABL AI drafts a public report for each AEDT, if mandated by the regulatory body, and presents the final deliverable, including the audit opinion, to the auditee.


Global HR Company
Challenge: A global HR company that uses AEDTs requires independent and impartial validation that it manages bias and risks appropriately ahead of upcoming regulation.
Solution: BABL AI delivers a targeted process audit that verifies the client has conducted, and appropriately documented, sufficient technical bias testing for disparate impact, AI governance structures, and ethical risk management.
Impact: Client and its stakeholders can confidently meet upcoming regulatory standards.

Silicon Valley Tech Company
Challenge: A major Silicon Valley tech firm is unclear whether its existing AI governance controls successfully mitigate the potential risks of its high-impact, unique AI technology, and whether the firm lives up to its aspiration of industry leadership in Responsible AI.
Solution: BABL AI conducted an algorithm ethical risk and AI governance assessment to evaluate the state of AI governance controls relative to both emerging industry standards and the unique risks posed by the client’s AI in its large-scale use context.
Impact: Assurance that the client has done its due diligence to mitigate the unique risks of its AI technology, including documenting good-faith efforts to mitigate those risks, and an actionable roadmap for achieving the aspired industry leadership.

Leading AI EdTech Vendor
Challenge: A leading EdTech vendor comes under intense public and regulatory scrutiny for potential bias in its core AI product, eroding the trust of clients and initiating costly lawsuits and Senatorial inquiries.
Solution: BABL AI is engaged to help develop a responsible AI strategy and to:
● Conduct a bias assessment of their core face detection algorithm, with a focus on mitigating potential sources of societal bias in training data;
● Develop and execute an ethical risk and impact assessment, identifying key ethical risks and governance mechanisms for mitigating those risks;
● Implement a data quality and model monitoring program, with demonstrated success measured through continuous improvement and reduced bias in production algorithms.
Impact: Through iterative improvement and transparent bias assessment documentation, the vendor is able to build public trust, address regulatory inquiries in good faith, and retain critical clients.


News
Founder and CEO of BABL AI to kick-off inaugural D.C. conference on AI
The Founder and CEO of BABL AI will be speaking at the …

Virginia Executive Directive Number Five
As lawmakers in Washington D.C. go back and forth on potential AI regulations, one stateside Governor has issued an executive …
What is the EU AI Act?
The European Union is once again leading the way in digital regulation with its latest piece of legislation, the …

What is the Digital Services Act?
As the European Union works on the final touches of its AI regulation legislation, the Harmonised Rules on Artificial …
Clients & Partners




