What are the different risk levels in the EU AI Act?

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/04/2023
In Blog

Even though the European Parliament is still negotiating the Harmonised Rules on Artificial Intelligence, commonly referred to as the EU AI Act, numerous questions persist about various aspects of this extensive legislation. The EU AI Act aims to regulate AI systems based on the level of risk they pose, categorizing them as minimal risk, limited risk, high risk, or unacceptable risk. The category an AI system falls into determines the obligations and restrictions applied under the EU AI Act, concentrating regulation on the highest-risk AI applications while imposing few requirements, if any, on minimal-risk AI systems.

According to the EU AI Act, minimal-risk AI systems face no mandatory obligations, though providers are encouraged to adhere to voluntary codes of conduct. Examples of minimal-risk AI systems include AI-enabled spam filters in email and product recommendation engines, while a classic example of a no-risk AI system is a video game with non-playable characters, or NPCs, which may face no regulation under the EU AI Act.

Moving to limited-risk AI systems, the EU AI Act imposes transparency obligations: providers must inform users when they are interacting with an AI system, such as a chatbot, and AI-generated or manipulated content, such as deepfakes, must be clearly labeled. These requirements are designed to be proportionate to the modest risk such systems pose.

For high-risk AI systems, the EU AI Act imposes strict obligations before they can be placed on the market or put into use, including conformity assessments, rigorous testing, risk management procedures, high-quality datasets, extensive documentation and record-keeping, cybersecurity measures, human oversight, and detailed user information, among other requirements. Examples of high-risk AI systems include those used in critical infrastructure such as energy, law enforcement, the administration of justice, medical diagnosis, safety components in transportation, credit scoring, HR recruitment and employee monitoring, and education.

The final risk level under the EU AI Act is unacceptable risk. This category is reserved for AI systems that are outright prohibited because they pose unacceptable risks to human values and fundamental rights. Examples include AI systems that exploit vulnerable groups, conduct mass surveillance of the public, create social scores, or employ subliminal techniques that distort people's behavior in harmful ways.
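
For readers prototyping internal compliance tooling, the four tiers can be captured in a simple lookup structure. The sketch below is purely illustrative Python: the tier names, obligation strings, and the `summarize` helper are our own labels paraphrasing this post's summary, not terms or requirements taken from the Act's legal text.

```python
from enum import Enum

# Purely illustrative: these tier names and obligation summaries paraphrase
# this post's description of the EU AI Act, not the Act's legal text.
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical mapping from each tier to the headline obligations discussed above.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["no mandatory obligations; voluntary codes of conduct encouraged"],
    RiskTier.LIMITED: ["transparency: tell users they are interacting with AI",
                       "label AI-generated or manipulated content"],
    RiskTier.HIGH: ["conformity assessment before market placement",
                    "risk management", "high-quality datasets",
                    "documentation and record-keeping",
                    "human oversight", "cybersecurity measures"],
    RiskTier.UNACCEPTABLE: ["prohibited outright"],
}

def summarize(tier: RiskTier) -> str:
    """Return a one-line summary of the obligations for a given risk tier."""
    return f"{tier.value} risk: " + "; ".join(OBLIGATIONS[tier])

if __name__ == "__main__":
    for tier in RiskTier:
        print(summarize(tier))
```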

While most AI systems fall into these four risk levels, exceptions exist. For instance, AI systems developed exclusively for national security purposes may be exempt from the EU AI Act, as may those developed for research and innovation, provided they are not put into service or placed on the market. Start-ups and small-scale providers creating high-risk AI systems may qualify for lighter obligations or a delayed compliance timeline. While the EU AI Act provides a broad risk-based framework, it also allows special cases to be examined individually, with flexibility built in to balance competing interests and innovation. However, it’s crucial to note that because the details of the EU AI Act are still being refined, this information is subject to change.

If you have questions about where your AI system falls within the four risk levels, or need assistance preparing for an EU AI Act Conformity Assessment, reach out to BABL AI. One of our Audit Experts can provide valuable assistance.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance News by subscribing to our newsletter.