EU AI Act

Conformity Requirements for High-Risk AI Systems

What you’ll learn

This short course on the EU AI Act is designed to help providers and deployers of high-risk AI systems navigate the complex regulatory landscape and comply with the Act’s requirements. The course is structured into several key topics, each focusing on a different aspect of the regulation and its implications for AI development.

Introduction to the EU AI Act: This topic provides an overview of the Act, its purpose, scope, and key definitions. Participants will gain an understanding of the regulatory framework and why compliance matters for the safe and ethical deployment of AI systems.

Understanding High-Risk AI Systems: This topic covers the criteria for determining whether an AI system is high-risk, including the areas of application and the risks associated with these systems. Real-world examples and case studies illustrate the identification process.

Obligations for Providers of High-Risk AI Systems: This topic covers the specific obligations imposed on providers of high-risk AI systems, such as transparency, data governance, technical documentation, and human oversight. Participants will explore best practices for meeting these obligations and the role of quality management systems in ensuring compliance.

Strategies for Implementing Requirements: This topic provides practical guidance on implementing the requirements of the EU AI Act. Participants will learn about risk management strategies, approaches to data quality and governance, and techniques for maintaining transparency and human oversight in AI systems.

Achieving Conformity and Obtaining a Conformity Assessment: This topic focuses on the conformity assessment process, including the steps to achieve conformity with the Act, the involvement of notified bodies, and the maintenance of compliance over time. Participants will gain insight into obtaining a conformity assessment and the importance of post-market monitoring.

Throughout the course, interactive elements such as quizzes, discussion prompts, and practical exercises will be used to enhance understanding and engagement. Supplementary materials like checklists, templates, and guidelines will be provided to help participants apply the concepts to their own AI systems.

By the end of the course, participants will have a comprehensive understanding of the EU AI Act and the tools and strategies needed to develop and deploy high-risk AI systems in compliance with the regulation.

About the Instructor

Dr. Shea Brown, CEO and Founder of BABL AI: Shea is an internationally recognized leader in AI and algorithm auditing, bias in machine learning, and AI governance. He has testified and advised on numerous AI regulations in the US and EU. He is a Fellow at ForHumanity, a non-profit working to set standards for algorithm auditing and organizational governance of artificial intelligence. He is also a founding member of the International Association of Algorithmic Auditors, a community of practice that aims to advance and organize the algorithmic auditing profession, promote AI auditing standards, certify best practices, and contribute to the emergence of responsible AI. He holds a PhD in Astrophysics from the University of Minnesota and is currently a faculty member in the Department of Physics & Astronomy at the University of Iowa, where he has been recognized for teaching excellence by the College of Liberal Arts & Sciences.

Curriculum

Introduction

What you'll learn (7:40)

Course Resources

1 - Foundations

Overview of the Act (12:11)

Risk categorization (24:57)

Definition of actors (7:23)

Obligations for Providers of High-Risk AI Systems (10:32)

Obligations for Deployers of High-Risk AI Systems (9:29)

Exercise #1: Risk Categorization (3:16)

2 - Strategies for Compliance

Quality Management System (26:18)

Risk Assessment & Management (25:00)

Transparency & Information Provision (20:37)

Exercise #2: Compliance Assessment

3 - Strategies for Compliance II

Data Governance & Quality (19:16)

Accuracy, Robustness, & Cybersecurity (12:29)

Human Oversight & Monitoring (11:48)

Don’t just take our word for it

Here’s what our graduates have to say…

What I particularly appreciated about this program was its ability to strengthen my understanding in areas where my knowledge was previously relatively limited. The content was not only thoroughly well-balanced, covering both foundational and advanced topics, but also very engaging and interesting. The unique nature of the program really stood out; despite my extensive search for similar courses, none matched the depth and relevance in risk management offered here.

– Tony Hibbert – AI Governance Expert, ING Bank

Throughout the course, I gleaned a framework for the science, laws and critical thinking behind AI auditing. Dr. Shea Brown and his team provided an excellent balance of theoretical knowledge and practical applications so that I could feel confident in working with a team to identify, analyze and test risks of the systems all around us… BABL AI created a welcoming environment for people regardless of professional background. I felt confident in scaling my knowledge and skills without a science background and also in my lived experience being respected as part of the auditing process.

– Luna

While learning, I was impressed by its well-structured design, practical content, and accessible resources. Whether you’re navigating your AI career or pursuing self-education for AI literacy, I highly recommend you enjoy the learning journey. You’ll gain not only AI knowledge and insights but also invaluable advice, confidence, and professional support from the community. The program’s comprehensive approach and engaging materials make it a standout choice for anyone looking to deepen their understanding of AI.

– Kelvin Lou – Tech Lawyer, Executive Director of OneCompliance Consulting

The BABL AI & Algorithm Auditor Certification program stands out in the responsible AI field for its practical, community-driven approach. Dr. Brown is an exceptional mentor whose emphasis on first-principles thinking helped me break down complex theoretical and technical concepts into accessible, actionable insights. As someone new to the field, the bi-weekly Q&A sessions were invaluable—providing me the chance to interact with peers and learn from their diverse experiences in a supportive environment. Most importantly, the program transformed me into a confident responsible AI practitioner. I now have the skills to identify AI system risks, audit algorithms, conduct assurance engagements, and guide organizations through AI governance challenges.

– Edward Feldman – Manager at Target

Following through with BABL’s “AI and Algorithm Auditor Certificate Program” has been a great decision and learning experience. I am yet to find another program that puts so much emphasis on a holistic approach for AI Auditing, aiming to connect the dots between areas like bias testing or AI governance. At the same time, the hands-on assignments incentivize one to thoroughly grasp the theoretical content and to apply it to one’s own professional context. Paired with Dr. Shea Brown’s gift for conveying knowledge in an engaging way, the kind support of the BABL team and the welcoming learning community, I recommend this program to technical and non-technical professionals alike who want to dive deeper into AI Auditing.

– Nadine Dammaschk – AI Governance Advisor at GIZ

In today’s AI-driven world, where algorithms are quietly influencing everything from credit decisions to healthcare, being a passive observer is no longer enough. That’s why I chose to embark on the AI and Algorithm Auditor Certification with BABL AI—and it has been a game-changer. It was an eye-opening, mind-expanding experience that equipped me with practical tools to assess, audit, and advocate for responsible AI. From uncovering hidden biases to designing risk mitigation strategies, I now have the confidence to engage with organizations and help them embed fairness, transparency, and accountability deep into their AI systems.

– Asim Butt – AI Ethics/Risk Auditor

Thanks to the rigorous program at BABL AI, I am now equipped with the expertise to consult companies and governments on responsible AI. I can guide organizations in building AI products that prioritize fairness, transparency, and explainability.

Additionally, I can assist in sourcing AI products that adhere to responsible AI principles, educate customers about the risks associated with generative and discriminative AI, develop responsible AI governance capabilities, and collaborate with governments to implement responsible AI initiatives. My journey with BABL AI has been a remarkable one.

– Abhinav

I have done many training programs in the field of AI Ethics and AI Assessments and/or Audits, and I can say that the AI and Algorithm Auditor Certificate Program from BABL AI is definitely one of the best on the market. Prof. Shea Brown does an excellent job both as Instructor and Mentor. Shea and the BABL AI team were always kind and helpful. We did a cohort-based training as first movers; I must admit that the cohort was amazing and made the training a great journey, but the program is also suitable for individuals who do it on their own, adjusting their pace around their other responsibilities.

– Mert

Choose a Pricing Option

EU AI Act – Conformity Requirements for High-Risk AI Systems

A certification course for risk and compliance professionals

Additional qualifying discounts are available

Contact us today to learn more