Understanding the Implications of the EU AI Act

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/13/2023
In Blog
As 2023 draws to a close, the European Union (EU) has made significant strides in shaping the future of artificial intelligence, reaching a provisional agreement on the EU AI Act on December 8. This provisional deal, currently in draft form, is the culmination of two and a half years of collaborative effort by experts. In this blog post, we delve into the key aspects of the EU AI Act and what it means for the future landscape of AI systems within the EU.


Establishing a Future-Proof Legal Framework

The primary goal of the EU AI Act is to create a future-proof legal framework that governs the development, placement, and use of AI systems within the European Union. Notably, the scope extends beyond EU-based systems, as it also applies to organizations outside the EU with AI systems available in the EU marketplace. Rooted in EU values and fundamental rights, the proposal aims to address risks associated with AI without hindering technological development.


Key Measures and Exceptions for AI Systems

While the EU AI Act introduces stringent measures for prohibited and high-risk AI systems, it incorporates exceptions to strike a balance between regulation and flexibility. Noteworthy exemptions include considerations for law enforcement activities and the protection of freedom of expression.


Law Enforcement Exemptions: The proposal acknowledges the role of AI systems in supporting law enforcement activities such as identifying suspects or victims of crime. To mitigate potential risks like discrimination or bias, specific provisions for the use of remote biometric ID systems are included, emphasizing requirements for accuracy, transparency, and accountability.


Freedom of Expression: Recognizing freedom of expression as a fundamental right, the proposal addresses associated risks such as the spread of misinformation and hate speech. Provisions for transparency, accountability, and human oversight are outlined for using AI systems in content moderation, along with mechanisms for redress and complaints from users who believe their rights are violated.


Categorizing AI Systems for Risk Assessment

The proposal categorizes AI systems based on their risk associations, including a distinct category for prohibited AI practices deemed particularly harmful to fundamental rights and societal values. Prohibited AI systems encompass those designed to manipulate human behavior, employ subliminal techniques targeting specific groups, and provide social scoring leading to discriminatory practices.


Risk Assessment and Obligations for High-Risk AI Systems

High-risk AI systems, such as those used in law enforcement, medical devices, migration, and critical infrastructure, face stringent requirements and obligations. These include comprehensive data quality and documentation throughout the design, development, and deployment phases, ensuring transparency, accountability, and human oversight to prevent bias and discrimination. Additionally, high-risk systems must demonstrate high levels of accuracy, reliability, and robustness, provide clear and accessible information to users, and undergo conformity assessment procedures to ensure compliance.


Medium-Risk and Low-Risk AI Systems

The legal framework extends to medium-risk AI systems, incorporating specific obligations proportionate to the potential risks they pose. The distinction between high-risk and medium-risk AI systems lies in the potential harm and corresponding legal requirements.


Low-risk AI systems, while not extensively discussed in the EU AI Act, are recognized as a distinct category expected to comply with existing legislation and ethical standards related to AI development and use.


Fines for Violations

A crucial element of the EU AI Act is the imposition of fines on systems found in violation. Fines range from 35 million Euros or 7% of global annual turnover for the most serious infringements down to 7.5 million Euros or 1.5% of turnover, depending on the severity of the infringement and the size of the company.
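As a rough illustration (not legal advice), the tiered penalty structure described above can be sketched in a few lines of code. This assumes the applicable fine is the greater of the fixed amount or the turnover percentage, as is typical for EU regulatory penalties; the function name and tier labels are hypothetical, and only the two tiers mentioned in the provisional deal are shown.

```python
def estimate_max_fine(turnover_eur: float, tier: str) -> float:
    """Rough sketch of the EU AI Act's tiered penalty ceilings.

    Assumption: the applicable ceiling is the HIGHER of a fixed amount
    or a percentage of global annual turnover. Tier names are illustrative.
    """
    tiers = {
        "most_serious": (35_000_000, 0.07),    # e.g. prohibited AI practices
        "least_serious": (7_500_000, 0.015),   # lower end of the range
    }
    fixed_amount, turnover_pct = tiers[tier]
    return max(fixed_amount, turnover_pct * turnover_eur)


# A company with 1 billion EUR turnover: 7% (70M) exceeds the 35M floor.
print(estimate_max_fine(1_000_000_000, "most_serious"))
```

For a smaller company whose turnover percentage falls below the fixed amount, the fixed amount would set the ceiling instead.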


Path to Implementation and Impact on Companies

While other nations are still drafting guidelines and exploring AI regulations, the EU Parliament could vote the EU AI Act into law by the end of 2023. However, a two-year transition period is anticipated before the Act comes into full effect.

For companies seeking clarity on how the EU AI Act, and similar regulations globally, may impact their operations, BABL AI's team of audit experts is ready to provide assistance.
