What you’ll learn
This short course on the EU AI Act is designed to help developers and deployers of high-risk AI systems navigate a complex regulatory landscape and comply with the Act’s requirements. The course is structured around several key topics, each focusing on a different aspect of the regulation and its implications for AI development.
Introduction to the EU AI Act: This provides an overview of the Act, its purpose, scope, and key definitions. Participants will gain an understanding of the regulatory framework and the importance of compliance for ensuring the safe and ethical deployment of AI systems.
Understanding High-Risk AI Systems: Here, participants will learn the criteria for determining whether an AI system is high-risk, including the areas of application and the risks associated with these systems. Real-world examples and case studies illustrate the identification process.
Obligations for Developers of High-Risk AI Systems: This topic covers the specific obligations imposed on developers of high-risk AI systems, such as transparency, data governance, technical documentation, and human oversight. Participants will explore best practices for meeting these obligations and the role of quality management systems in ensuring compliance.
Strategies for Implementing Requirements: Here we provide practical guidance on implementing the requirements of the EU AI Act. Participants will learn about risk management strategies, data quality and governance approaches, and techniques for maintaining transparency and human oversight in AI systems.
Achieving Conformity and Obtaining a Conformity Assessment: Here we focus on the conformity assessment process, including the steps to achieve conformity with the Act, the involvement of notified bodies, and the maintenance of compliance over time. Participants will gain insights into the process of obtaining a conformity assessment and the importance of post-market monitoring.
Throughout the course, interactive elements such as quizzes, discussion prompts, and practical exercises will be used to enhance understanding and engagement. Supplementary materials like checklists, templates, and guidelines will be provided to help participants apply the concepts to their own AI systems.
By the end of the course, participants will have a comprehensive understanding of the EU AI Act and the tools and strategies needed to develop and deploy high-risk AI systems in compliance with the regulation.
About the Instructor
Shea Brown is the founder and CEO of BABL AI, a research consultancy that focuses on the ethical use and development of artificial intelligence. His research addresses algorithm auditing and bias in machine learning, and he serves as a Fellow at ForHumanity, an organization that sets standards for the organizational governance of artificial intelligence.
He has a PhD in Astrophysics from the University of Minnesota and is currently an Associate Professor of Instruction in the Department of Physics & Astronomy at the University of Iowa, where he has been recognized for teaching excellence by the College of Liberal Arts & Sciences.
Curriculum
Introduction
What you'll learn
Course Resources
Week 1 - Foundation
The main modes of working (14:43)
Spotting risks (11:34)
Researching solutions (9:23)
Effective communications (13:02)
Exercise 1: Putting your knowledge to work (5:26)
Specialized tasks
Overview of non-technical tasks (27:22)
Algorithms, AI and learning machines (21:01)
Bias testing (21:41)
Exercise 2: Finding your niche
What now? (3:26)
Choose a Pricing Option
EU AI Act: Conformity Requirements for High-Risk AI Systems
Additional qualifying discounts are available
Contact us today to learn more