UPDATE – MAY 2025: Since this post was first published, the EU AI Act was formally adopted in March 2024. The four-tier risk classification system—minimal, limited, high, and unacceptable risk—remains central to the final version. While some details may continue to evolve during implementation and guidance rollouts, the core obligations tied to each risk level are now confirmed. BABL AI provides support with AI risk classification, documentation, and Conformity Assessments to help organizations comply with the EU AI Act.
ORIGINAL BLOG:
Breaking Down the EU AI Act’s Four-Tier Risk Classification System
As the EU finalizes its Harmonised Rules on Artificial Intelligence, commonly referred to as the EU AI Act, numerous questions persist about various aspects of this extensive legislation. The EU AI Act regulates AI systems based on the level of risk they pose, categorizing them into minimal-risk, limited-risk, high-risk, and unacceptable-risk tiers. The classification into these categories determines the obligations and restrictions applied under the EU AI Act, concentrating regulation on the highest-risk AI applications while imposing few regulations, if any, on minimal-risk AI systems.
1. Minimal-Risk AI Systems
Minimal-risk systems are those with little or no impact on users' rights or safety. The EU AI Act imposes no mandatory obligations on them, though providers are encouraged to adopt voluntary codes of conduct covering practices such as:
- Informing users they are interacting with AI
- Offering clear documentation
- Maintaining human oversight
Examples:
- Spam filters in email
- AI in video games (e.g., non-playable characters, or NPCs)
- Product recommendation engines
Minimal-risk AI generally faces no additional regulatory burden under the EU AI Act.
2. Limited-Risk AI Systems
Limited-risk systems face moderate obligations, primarily centered on transparency: people must be able to tell when they are interacting with an AI system or viewing AI-generated content.
Core requirements:
- Disclosing that users are interacting with an AI system
- Labeling AI-generated or manipulated content, including deepfakes
- Informing individuals when emotion recognition or biometric categorization is used
Examples:
- Chatbots and virtual assistants
- Generators of AI-created images, audio, or video
3. High-Risk AI Systems
High-risk systems carry the most extensive compliance requirements. These are systems with the potential to significantly affect health, safety, fundamental rights, or critical infrastructure. Before being placed on the market, they must undergo a Conformity Assessment by internal or third-party evaluators and meet rigorous standards throughout their lifecycle.
Requirements include:
- Use of high-quality training datasets
- Risk assessment and mitigation processes
- Record-keeping and logging
- Human oversight mechanisms
- Detailed user instructions
- Cybersecurity protections
- Continuous post-market monitoring
Examples:
- AI used in medical diagnoses or law enforcement
- Systems in education, transportation, or the courts
- Credit scoring and creditworthiness assessment
- Automated recruiting and employee monitoring tools
4. Unacceptable-Risk AI Systems
Unacceptable-risk systems are prohibited outright under the EU AI Act. These practices are deemed to violate fundamental rights or pose unacceptable harm to individuals or society.
Examples:
- Social scoring systems
- Predictive policing based solely on profiling or personality traits
- Subliminal or purposefully manipulative AI techniques that cause significant harm
- Systems that exploit vulnerabilities related to age, disability, or social or economic situation
- Untargeted scraping of facial images to build facial recognition databases
- Real-time remote biometric identification in publicly accessible spaces, with narrow law-enforcement exceptions
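The four tiers above form a strict hierarchy: a system is governed by the most restrictive tier whose criteria it meets. As a purely illustrative sketch, the idea can be expressed as a toy lookup in Python. The use-case strings and the tier mapping here are simplified assumptions for demonstration (the "chatbot" entry, for instance, is a commonly cited transparency-tier example, not one drawn from this post), and none of this constitutes a legal classification:

```python
# Toy triage helper mapping illustrative use-case labels to the four
# EU AI Act risk tiers. The mapping is a simplified assumption for
# demonstration purposes only, not legal advice.

RISK_TIERS = {
    "unacceptable": {"social scoring", "subliminal manipulation"},
    "high": {"medical diagnosis", "law enforcement screening"},
    "limited": {"chatbot"},
    "minimal": {"spam filter", "video game npc"},
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a known example use case, or 'unclassified'."""
    normalized = use_case.strip().lower()
    # Check tiers from most to least restrictive, mirroring the Act's
    # logic that the strictest applicable tier governs the system.
    for tier in ("unacceptable", "high", "limited", "minimal"):
        if normalized in RISK_TIERS[tier]:
            return tier
    return "unclassified"  # edge cases need case-by-case legal analysis

print(classify_use_case("Social scoring"))  # unacceptable
print(classify_use_case("Spam filter"))     # minimal
```

In practice, classification turns on detailed criteria in the Act's annexes and implementing guidance, so real-world triage requires legal analysis rather than a keyword lookup.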
Exemptions and Special Cases
Not all AI systems fall neatly into these categories. The EU AI Act allows exemptions for:
- AI used exclusively for national security purposes
- Systems developed for research and innovation, provided they are not placed on the market
- Open-source tools, with limited exceptions
- Startups and SMEs, which may receive delayed timelines or adjusted requirements when building high-risk systems
The law also builds in flexibility for edge cases, with further guidance expected as implementation unfolds.
Need Help Understanding Your Risk Level?
Most companies will be impacted by the EU AI Act in some way—even if they’re outside the EU. BABL AI provides end-to-end support, including:
- AI risk classification
- Conformity Assessments
- Regulatory documentation
- Compliance strategy for high-risk systems
Contact BABL AI to get started with an audit or to speak with one of our EU AI Act compliance experts.