Our Audits
Our Audits ensure global AI legal compliance & assurance. Whether you're striving for compliance with one or multiple global AI regulations, or simply want to assure your stakeholders that your AI systems are safe, BABL AI has you covered.
EU AI Act Conformity Assessment Readiness Audit
How our Audit works
Scoping
BABL AI will conduct a comprehensive assessment of your organization's existing AI system policies and procedures to evaluate their alignment with EU AI Act requirements. Our expert team then formulates a strategic action plan to assist you in achieving compliance. NO SOFTWARE DOWNLOADS OR PLATFORM INTEGRATIONS REQUIRED!
Fieldwork
As your dedicated internal audit partner, BABL AI will collaborate closely with your organization to identify the required capabilities outlined in the regulation, ensuring you can seamlessly prepare for the conformity assessment. Throughout this journey, our team may need to engage with personnel at different levels within your organization to identify the necessary policies and procedures mandated by the regulation.
Final Report
BABL AI will generate a concise summary report, offering insights into your current compliance status and delivering valuable recommendations and corrective actions to facilitate your ongoing compliance efforts.
FAQs
How to get a Conformity Assessment for the EU AI Act?
Currently, the rules for the EU AI Act are not finalized and no organization can obtain a Conformity Assessment. However, many of the required elements of these Conformity Assessments are publicly available, and BABL AI can help organizations prepare by performing an EU AI Act Conformity Assessment Readiness Audit.
What is the EU AI Act?
The EU Artificial Intelligence Act proposes a comprehensive framework for categorizing and overseeing artificial intelligence applications according to their potential for harm. This framework predominantly comprises three key classifications: prohibited practices, high-risk AI systems, and other AI applications. High-risk AI systems, as defined by the Act, encompass those presenting substantial risks to public health, safety, or fundamental human rights. Before being placed on the market, these systems must undergo a mandatory compliance evaluation, which the provider performs through self-assessment.
What is a Conformity Assessment?
The process of conformity assessment demonstrates to the public that an AI System meets the relevant requirements of the EU AI Act.
Is this audit a Conformity Assessment?
The rules governing Conformity Assessments have not yet been finalized. This audit process helps your organization be fully prepared for a Conformity Assessment once those rules are in place.
What is a “high-risk” AI system?
As defined in the EU AI Act, high-risk systems are those that present substantial risks to public health, safety, or fundamental human rights. These systems include, but are not limited to, those used for:
- Biometric identification and categorization of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management, and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
Who does the EU AI Act apply to?
This regulation is applicable to AI system stakeholders whose products are available in the EU market or impact individuals within the EU, irrespective of their organizational type, whether public or private, and whether they operate within or outside the EU. The regulation outlines specific obligations for these stakeholders:
- Providers: Organizations that develop an AI system, or have one developed on their behalf, and place it on the market under their own name or brand.
- Importers: Organizations that bring AI systems from outside the EU and place them on the EU market under the original name or brand.
- Distributors: Organizations in the supply chain that make AI systems available on the EU market without modifying them; they are neither providers nor importers.
- Users: People or organizations that use AI systems in a professional capacity, not for personal, non-professional use.
Each role has its own responsibilities under the regulation.
Note: Providers, such as the developer of a candidate screening tool, bear different obligations from users, such as a bank procuring this screening tool.
Which obligations do AI system providers have under the EU AI Act?
Legal requirements pertaining to providers include areas such as:
- Quality management system
- Risk management system
- Data governance
- Technical documentation and transparency
- Human oversight
- Post-market monitoring system, among others
NYC Bias Audit
How our Audit works
Pre-Audit Scoping
BABL AI walks your team through a series of questions to determine where you currently stand. We then inform your team what documentation is needed for our audit process and help you understand everything required for compliance, including how to test the various selection rates and impact ratios of protected categories as required by the law. NO SOFTWARE DOWNLOADS OR PLATFORM INTEGRATIONS REQUIRED!
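For illustration, the Python sketch below shows one way the selection-rate and impact-ratio arithmetic described above can be computed from hiring outcome data: the selection rate is the share of candidates in a category who are selected, and the impact ratio divides each category's selection rate by the highest category's rate. The category labels, counts, and function names are hypothetical, and the sketch is a minimal illustration of the arithmetic, not BABL AI's audit methodology.

# Minimal illustration (hypothetical data and names) of the metrics defined by NYC Local Law 144:
#   selection rate = candidates selected in a category / total candidates in that category
#   impact ratio   = a category's selection rate / the highest category's selection rate

def selection_rates(outcomes):
    """outcomes maps category -> (selected, total applicants)."""
    return {cat: selected / total for cat, (selected, total) in outcomes.items()}

def impact_ratios(rates):
    """Divide each category's selection rate by the highest selection rate."""
    highest = max(rates.values())
    return {cat: rate / highest for cat, rate in rates.items()}

if __name__ == "__main__":
    outcomes = {  # category: (selected, applicants) -- hypothetical counts
        "Category A": (48, 120),
        "Category B": (30, 100),
        "Category C": (18, 75),
    }
    rates = selection_rates(outcomes)
    for cat, ratio in impact_ratios(rates).items():
        print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {ratio:.2f}")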
Audit Review
Once all your documentary evidence has been submitted for evaluation, our well-trained, certified auditors review your documentation against our criteria. During this review process, BABL AI auditors might ask for more supporting documentation or interact with your internal and external stakeholders, such as employees or other third parties, to verify the truth of statements made in the submitted documentation. At the end of the audit review, the auditors reach an overall audit opinion that determines the result of the audit.
Public Summary
Once an opinion is determined, BABL AI will draft a public summary report for each AEDT, if mandated by the regulatory body, and present the final report to your team.
FAQs
What is the New York City Bias Audit Law (Local Law 144)?
The New York City Bias Audit Law (Local Law 144) governs the use of automated employment decision tools (AEDTs) in New York City. It mandates that algorithm-based recruiting, hiring, or promotion technologies must undergo an annual bias audit by an independent third-party auditor before they can be used for candidates and employees.
What is an AEDT?
AEDT stands for "Automated Employment Decision Tool." It refers to software or systems that use algorithms and automation to assist in various aspects of the employment process, such as recruiting, hiring, or promoting employees. AEDTs can be used for tasks like screening resumes, conducting initial candidate assessments, or identifying potential candidates for job openings.
Who is affected by the NYC Bias Audit Law?
New York City Local Law 144, also known as the Bias Audit Law, affects employers of all sizes within New York City that use automated employment decision tools (AEDTs) for recruiting, hiring, or promoting candidates and employees.
What are some examples of an AEDT?
The NYC Bias Audit Law defines an AEDT as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”
This includes, but is not limited to: Applicant Tracking Systems (ATS), Pre-employment Assessment Software, Chatbots for Initial Screening, Resume Screening Algorithms, and Video Interview Platforms.
What information is disclosed on the audit public summary?
We transparently provide all legally required information, along with essential context to comprehend the audit opinion and summary of results. Furthermore, we openly disclose our audit criteria and methodology, offering maximal transparency while safeguarding the intellectual property of the audited organization.
How long does the audit process take?
On average, our audits take 2 to 3 weeks to complete after all relevant documentation has been submitted. This timeframe can vary depending on circumstances such as readiness and the resubmission of unsatisfactory documentation, but BABL AI will help you at every stage to ensure an easy process.
Why should I get an Algorithmic Audit?
In addition to ensuring compliance with emerging regulations such as New York City Local Law 144, an algorithmic audit serves as a powerful tool for reassuring your stakeholders that your algorithms undergo vigilant bias monitoring and testing. This proactive approach helps mitigate potential risks posed by your systems while significantly enhancing transparency, ultimately fostering greater trust in your AI systems.
Can you do multiple audits at the same time?
Absolutely! Our audit process is highly adaptable to cater to your unique compliance and assurance requirements. Feel free to get in touch with us and have a conversation with one of our expert auditors to discover the full spectrum of solutions BABL AI can offer.
ISO/IEC 42001 Certification - Artificial Intelligence Management System
Mitigate AI risk and navigate regulations with ISO 42001 Certification. BABL AI's auditors certify your AI Management System against ISO 42001, the first globally recognized international management system standard specifically designed for artificial intelligence.
How our Audit works
Stage 1 Audit
BABL AI will assess your organization’s current standing toward compliance with ISO 42001. We review the documentation of your AI Management System, focusing on areas like risk management, governance policies, and internal procedures. Based on the documentation review and interviews, we identify any gaps or areas of non-conformance that need to be addressed before the Stage 2 audit.
Stage 2 Audit
Before beginning Stage 2, you'll address findings from the Stage 1 audit. Corrective actions might include revising documents, implementing new controls, or additional training. Successful completion leads to our Stage 2 Audit, an in-depth assessment involving interviews, documentation reviews, and observations to identify any non-conformities with ISO 42001 requirements.
Certification
Once the Stage 2 Audit is complete, BABL AI's audit team will present their findings and issue a certification. Just like our other audits, the ISO 42001 Certification process is designed to be straightforward and transparent, with no need for software downloads or complex integrations.
FAQs
What is the ISO 42001 Certification - Artificial Intelligence Management System?
ISO 42001 is an international standard that provides guidelines for establishing an AI management system (AIMS) within organizations. It specifies requirements and provides guidance for responsibly developing, implementing, and using artificial intelligence (AI) systems.
Who should use ISO 42001?
ISO 42001 is intended for use by any organization that develops, provides, or uses artificial intelligence (AI) systems. The standard is applicable across all industries and organization sizes.
How do I implement ISO 42001 in my organization?
To implement ISO 42001:
- Conduct a gap analysis against the standard's requirements.
- Develop an AI management system framework with policies, objectives, processes, and governance for managing AI systems responsibly.
- Perform risk and impact assessments to identify and mitigate potential hazards.
- Integrate ethical AI practices like fairness and transparency.
- Establish robust data governance, security controls, and decision transparency.
- Provide training to employees and set up feedback channels.
- Once all elements are in place, undergo third-party certification auditing.
- Continuously monitor performance, identify areas for improvement, and make adjustments to maintain compliance with the ISO 42001 standard.
What should my organization do if we experience challenges in applying ISO 42001?
BABL AI's experienced auditors can help your organization apply ISO 42001 and achieve certification of your Artificial Intelligence Management System.
NIST AI Risk Management Framework Readiness Audit
How our Audit works
Scoping
BABL AI will conduct a comprehensive assessment of your organization's existing AI governance and risk management policies and procedures to evaluate their alignment with the NIST AI Risk Management Framework. Our expert team then formulates a strategic action plan to help you close any gaps. NO SOFTWARE DOWNLOADS OR PLATFORM INTEGRATIONS REQUIRED!
Fieldwork
As your dedicated internal audit partner, BABL AI will collaborate closely with your organization to map your existing practices to the framework's Govern, Map, Measure, and Manage functions. Throughout this process, our team may need to engage with personnel at different levels within your organization to identify the relevant policies and procedures.
Final Report
BABL AI will generate a concise summary report, offering insights into your current alignment with the framework and delivering valuable recommendations and corrective actions to support your ongoing risk management efforts.
FAQs
What is the NIST AI Risk Management Framework?
The NIST AI Risk Management Framework (AI RMF) is a voluntary framework published by the U.S. National Institute of Standards and Technology to help organizations identify, assess, and manage the risks associated with artificial intelligence systems. It is organized around four core functions: Govern, Map, Measure, and Manage.
Who should use the NIST AI RMF?
The framework is intended for any organization that designs, develops, deploys, or uses AI systems, regardless of industry or size.
How do I apply the NIST AI RMF in my organization?
Start by establishing governance structures, policies, and accountability for AI risk (Govern). Then document the context, intended use, and potential impacts of each AI system (Map), evaluate the identified risks with appropriate metrics, testing, and monitoring (Measure), and prioritize and mitigate those risks through documented response and monitoring plans (Manage). Revisit each function regularly as your systems and applicable regulations evolve.
What should my organization do if we experience challenges in applying the NIST AI RMF?
BABL AI's experienced auditors can help your organization assess its current practices against the framework through our NIST AI Risk Management Framework Readiness Audit.
Digital Services Act Audit
How our Audit works
Pre-Audit Scoping
BABL AI walks your team through a series of questions to determine where you currently stand. We then inform your team what documentation is needed for our audit process and help you understand everything required for compliance with the Act's transparency, risk assessment, and independent audit obligations. NO SOFTWARE DOWNLOADS OR PLATFORM INTEGRATIONS REQUIRED!
Audit Review
Once all your documentary evidence has been submitted for evaluation, our well-trained, certified auditors review your documentation against our criteria. During this review process, BABL AI auditors might ask for more supporting documentation or interact with your internal and external stakeholders, such as employees or other third parties, to verify the truth of statements made in the submitted documentation. At the end of the audit review, the auditors reach an overall audit opinion that determines the result of the audit.
Final Report
Once an opinion is determined, BABL AI will draft a public summary report for each algorithm, if mandated by the regulatory body, and present the final report to your team.
FAQs
Why should I get an Algorithmic Audit?
In addition to ensuring compliance with emerging regulations such as New York City Local Law 144 or the EU AI Act, an algorithmic audit serves as a powerful tool for reassuring your stakeholders that your algorithms undergo vigilant bias monitoring and testing. This proactive approach helps mitigate potential risks posed by your systems while significantly enhancing transparency, ultimately fostering greater trust in your AI systems.
What information is disclosed on the audit public summary?
We transparently provide all legally required information, along with essential context to comprehend the audit opinion and summary of results. Furthermore, we openly disclose our audit criteria and methodology, offering maximal transparency while safeguarding the intellectual property of the audited organization.
What is the Digital Services Act?
The Digital Services Act (DSA) governs the responsibilities of digital service providers that serve as intermediaries, facilitating connections between consumers and goods, services, and content, encompassing platforms like online marketplaces. It establishes a robust framework for transparency and accountability among online platforms, creating a unified regulatory structure across the European Union.
Who is affected by the Digital Services Act in the EU?
Under the Digital Services Act, the European Union designates online platforms and search engines with 45 million or more monthly active users in the EU as very large online platforms (VLOPs) or very large online search engines (VLOSEs). These entities face the Act's most stringent obligations, including mandatory independent audits.
Can you do multiple audits at the same time?
Absolutely! Our audit process is highly adaptable to cater to your unique compliance and assurance requirements. Feel free to get in touch with us and have a conversation with one of our expert auditors to discover the full spectrum of solutions BABL AI can offer.
EEOC AI Bias Audit
How our Audit works
Pre-Audit Scoping
BABL AI walks your team through a series of questions to determine where you currently stand. We then inform your team what documentation is needed for our audit process and help you understand everything required for compliance, including how to test the various selection rates and impact ratios of protected categories as required by the law. NO SOFTWARE DOWNLOADS OR PLATFORM INTEGRATIONS REQUIRED!
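As context for the selection-rate testing mentioned above, the Python sketch below applies the four-fifths (80%) rule of thumb referenced in EEOC guidance, under which a group's selection rate below four-fifths of the highest group's rate is generally treated as evidence of adverse impact. The group names and rates are hypothetical, and this illustration is not a substitute for a full audit.

# Illustrative check (hypothetical groups and rates) of the four-fifths (80%)
# rule of thumb referenced in EEOC guidance: a group's selection rate below
# 80% of the highest group's rate is generally treated as evidence of adverse impact.

FOUR_FIFTHS = 0.8

def adverse_impact_flags(rates):
    """Return True for groups whose selection rate is under 80% of the highest rate."""
    highest = max(rates.values())
    return {group: (rate / highest) < FOUR_FIFTHS for group, rate in rates.items()}

if __name__ == "__main__":
    rates = {"Group 1": 0.45, "Group 2": 0.40, "Group 3": 0.33}  # hypothetical selection rates
    for group, flagged in adverse_impact_flags(rates).items():
        status = "below the four-fifths threshold" if flagged else "within the threshold"
        print(f"{group}: selection rate {rates[group]:.2f} ({status})")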
Audit Review
Once all your documentary evidence has been submitted for evaluation, our well-trained, certified auditors review your documentation against our criteria. During this review process, BABL AI auditors might ask for more supporting documentation or interact with your internal and external stakeholders, such as employees or other third parties, to verify the truth of statements made in the submitted documentation. At the end of the audit review, the auditors reach an overall audit opinion that determines the result of the audit.
Final Report
Once an opinion is determined, BABL AI will draft a public summary report for each algorithm, if mandated by the regulatory body, and present the final report to your team.
FAQs
Why should I get an Algorithmic Audit?
In addition to ensuring compliance with emerging regulations such as New York City Local Law 144, an algorithmic audit serves as a powerful tool for reassuring your stakeholders that your algorithms undergo vigilant bias monitoring and testing. This proactive approach helps mitigate potential risks posed by your systems while significantly enhancing transparency, ultimately fostering greater trust in your AI systems.
What information is disclosed on the audit public summary?
We transparently provide all legally required information, along with essential context to comprehend the audit opinion and summary of results. Furthermore, we openly disclose our audit criteria and methodology, offering maximal transparency while safeguarding the intellectual property of the audited organization.
How long does the audit process take?
On average, our audits take 2 to 3 weeks to complete after all relevant documentation has been submitted. This timeframe can vary depending on circumstances such as readiness and the resubmission of unsatisfactory documentation, but BABL AI will help you at every stage to ensure an easy process.
Can you do multiple audits at the same time?
Absolutely! Our audit process is highly adaptable to cater to your unique compliance and assurance requirements. Feel free to get in touch with us and have a conversation with one of our expert auditors to discover the full spectrum of solutions BABL AI can offer.
Legal Compliance Audit
BABL AI's Audits Ensure Global AI Legal Compliance & Assurance
Whether you're striving for compliance with a single or multiple global AI regulations, we've got you covered. Our comprehensive compliance and assurance audits cover, but are not limited to:
- NYC Local Law 144
- EU AI Act Conformity Assessment
- Generative AI Bias Audit
- Digital Services Act
- EEOC
- NIST AI Risk Management Framework
- ISO standards
- New Jersey A4909
- California AB 331
How our Audit works
Pre-Audit Scoping
BABL AI walks your team through a series of questions to determine where you currently stand. We then inform your team what documentation is needed for our audit process and help you understand everything required for compliance, including, where applicable, how to test the selection rates and impact ratios of protected categories as required by law. NO SOFTWARE DOWNLOADS OR PLATFORM INTEGRATIONS REQUIRED!
Audit Review
Once all your documentary evidence has been submitted for evaluation, our well-trained, certified auditors review your documentation against our criteria. During this review process, BABL AI auditors might ask for more supporting documentation or interact with your internal and external stakeholders, such as employees or other third parties, to verify the truth of statements made in the submitted documentation. At the end of the audit review, the auditors reach an overall audit opinion that determines the result of the audit.
Final Report
Once an opinion is determined, BABL AI will draft a public summary report for each algorithm, if mandated by the regulatory body, and present the final report to your team.
FAQs
Why should I get an Algorithmic Audit?
In addition to ensuring compliance with emerging regulations such as New York City Local Law 144, an algorithmic audit serves as a powerful tool for reassuring your stakeholders that your algorithms undergo vigilant bias monitoring and testing. This proactive approach helps mitigate potential risks posed by your systems while significantly enhancing transparency, ultimately fostering greater trust in your AI systems.
What information is disclosed on the audit public summary?
We transparently provide all legally required information, along with essential context to comprehend the audit opinion and summary of results. Furthermore, we openly disclose our audit criteria and methodology, offering maximal transparency while safeguarding the intellectual property of the audited organization.
Can you do multiple audits at the same time?
Absolutely! Our audit process is highly adaptable to cater to your unique compliance and assurance requirements. Feel free to get in touch with us and have a conversation with one of our expert auditors to discover the full spectrum of solutions BABL AI can offer.
Need an Audit or Consulting Services?
BABL AI can help with all your AI Audit and Compliance needs. Reach out to us today for a free consultation!