UPDATE – JUNE 2025: The EU AI Act officially entered into force on August 1, 2024, with implications continuing to take effect through 2027. Bans on unacceptable-risk AI and AI literacy requirements began in February 2025, while obligations for general-purpose and high-risk AI systems will phase in from August 2025 to August 2027. Although the Act was only provisionally agreed upon in December 2023, its phased implementation and global influence are now well underway.
ORIGINAL BLOG POST:
Understanding the Implications of the EU AI Act
As 2023 draws to a close, the European Union (EU) has made significant strides in shaping the future of artificial intelligence with the provisional agreement on the EU AI Act, reached on December 8. This provisional deal, currently in draft form, is the culmination of two and a half years of collaborative effort by experts. In this blog post, we delve into the key aspects of the EU AI Act and what it means for the future landscape of AI systems within the EU.
A Future-Proof Framework for AI Governance
The primary goal of the EU AI Act is to create a future-proof legal framework that governs the development, placement, and use of AI systems within the European Union. Notably, the scope extends beyond EU-based systems. It also applies to organizations outside the EU with AI systems available in the EU marketplace. Rooted in EU values and fundamental rights, the proposal aims to address risks associated with AI without hindering technological development.
Key Measures and Legal Exceptions for AI Systems
While the EU AI Act introduces stringent measures for prohibited and high-risk AI systems, it incorporates exceptions to strike a balance between regulation and flexibility. Noteworthy exemptions include considerations for law enforcement activities and the protection of freedom of expression.
Law Enforcement Exemptions: The proposal acknowledges the role of AI systems in supporting law enforcement activities such as identifying suspects or victims of crime. To mitigate potential risks like discrimination or bias, specific provisions for the use of remote biometric ID systems are included, emphasizing requirements for accuracy, transparency, and accountability.
Freedom of Expression: Recognizing freedom of expression as a fundamental right, the proposal addresses associated risks such as the spread of misinformation and hate speech. Provisions for transparency, accountability, and human oversight are outlined for using AI systems in content moderation, along with mechanisms for redress and complaints from users who believe their rights are violated.
Categorizing AI Systems for Risk Assessment
The proposal categorizes AI systems based on their risk associations, including a distinct category for prohibited AI practices deemed particularly harmful to fundamental rights and societal values. Prohibited AI systems encompass those designed to manipulate human behavior, employ subliminal techniques targeting specific groups, and provide social scoring leading to discriminatory practices.
Risk Assessment and Obligations for High-Risk AI Systems
High-risk AI systems, such as those used in law enforcement, medical devices, migration, and critical infrastructure, face stringent requirements and obligations. These include comprehensive data-quality and documentation measures throughout the design, development, and deployment phases, ensuring transparency, accountability, and human oversight to prevent bias and discrimination. High-risk systems must also demonstrate high levels of accuracy, reliability, and robustness, provide clear and accessible information to users, and undergo conformity assessment procedures to verify compliance.
Medium- and Low-Risk AI Classifications
The legal framework extends to medium-risk AI systems, incorporating specific obligations proportionate to the potential risks they pose. The distinction between high-risk and medium-risk AI systems lies in the potential harm and corresponding legal requirements.
Low-risk AI systems are not extensively addressed in the EU AI Act; they are simply expected to comply with existing legislation and ethical standards related to AI development and use.
Penalties for Non-Compliance
A crucial element of the EU AI Act is the imposition of fines for systems found in violation. Fines range from 35 million Euros or 7% of global annual turnover, whichever is higher, down to 7.5 million Euros or 1.5%, depending on the severity of the infringement and the size of the company.
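To make the tiered fine structure concrete, here is a minimal sketch of the "fixed cap or percentage of turnover, whichever is higher" logic. The middle tier (15 million Euros or 3%) and the exact tier names are assumptions based on the December 2023 provisional agreement; this is an illustration, not legal advice.

```python
# Illustrative sketch of the EU AI Act's tiered fine structure as reported
# in the December 2023 provisional agreement. Tier names, amounts, and the
# "whichever is higher" rule are assumptions for illustration only.

FINE_TIERS = {
    # tier: (fixed cap in euros, share of global annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),       # assumed middle tier
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, annual_turnover: float) -> float:
    """Return the maximum possible fine for a tier: the higher of the
    fixed cap or the turnover-based percentage."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover)

# A company with 1 billion Euros in global turnover committing a
# prohibited practice: 7% of turnover (70M) exceeds the 35M fixed cap.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

Note that for larger companies the percentage-based figure dominates, while for smaller ones the fixed cap sets the ceiling, which is how the range described above scales with company size.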
The EU AI Act Timeline and Global Impact
The EU Parliament could vote the EU AI Act into law by the end of 2023; however, a two-year transition period is anticipated before it takes full effect.
Need Help with AI Compliance? Contact BABL AI
If you are seeking clarity on how the AI Act, and similar regulations globally, may impact you and your operations, BABL AI's team of audit experts is ready to provide assistance.


