UPDATE — JULY 2025: The information in this blog post is conceptually correct, but several key dates and developments have since been finalized. As of July 2025, the EU AI Act is now fully in force and moving through its phased implementation period:
- The EU AI Act was published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024.
- Prohibitions on unacceptable-risk AI systems (e.g., social scoring, emotion recognition in workplaces and schools, and untargeted biometric identification) have been enforceable since February 2, 2025.
- General-purpose AI obligations and penalties take effect August 2, 2025, with a final Code of Practice expected soon.
- Full compliance for high-risk AI systems is required by August 2, 2026, including implementation of robust risk management, human oversight, transparency, and post-market monitoring requirements.
The regulation introduces significant penalties—up to €35 million or 7% of global turnover—for noncompliance, depending on the severity of the violation. It also mandates AI literacy, responsible AI design, and conformity assessments for high-risk use cases in sectors like healthcare, education, law enforcement, and employment.
Ongoing developments as of 2025 include:
- The EU is finalizing secondary legislation and guidance to clarify obligations across the AI lifecycle.
- A new EU AI Office and European Artificial Intelligence Board are being operationalized to oversee enforcement and harmonization.
- Member states are preparing to designate notified bodies by August 2025, and must establish regulatory sandboxes by August 2026.
This post remains a useful overview of the EU AI Act’s structure and risk-based approach, but readers should reference the finalized compliance deadlines and updated guidance now available across official EU digital strategy platforms.
ORIGINAL BLOG POST:
The EU AI Act: Understanding its Implications and Implementation Challenges
The recently passed EU AI Act is a significant milestone in the regulation of AI systems across the European Union: a comprehensive new regulation designed to govern the use of AI across all member states. It has undergone a rigorous legislative process involving the European Parliament, the European Commission, and the Council of the European Union to reconcile differing versions into a final agreed text.
Following its passage, the Act will soon be published in the EU's Official Journal, expected around May 2024. Once published, it will become a binding regulation after a 20-day period, although immediate compliance is not required. Instead, there is a phased rollout with key milestones. Six months after entry into force, prohibitions on certain "unacceptable risk" AI systems, such as remote biometric identification without consent, will take effect. Approximately one year after enactment, detailed guidance will be issued to clarify the prohibited and high-risk categories. Two years after enactment, companies must fully comply with the requirements for high-risk AI systems, including risk management, testing, documentation, and human oversight mandates.
High-risk AI encompasses critical sectors like education, healthcare, and law enforcement, where poorly governed systems pose significant risks. Companies are affected differently depending on where their AI systems fall in the Act's risk tiers. Those using applications likely to fall into the prohibited category, such as remote biometrics, must assess their legality and face penalties for continued use. Those using high-risk AI for decision-making in areas like recruitment must achieve full compliance within two years. Companies using low-risk AI systems will face fewer requirements, though even for them the regulation's principles could shape global AI governance expectations. While the Act addresses ethical concerns, there is debate over its necessity and impact, particularly on smaller businesses; nonetheless, AI oversight is important given the biases inherent in many systems.
Conclusion
Despite challenges, AI governance is an opportunity for companies to build public trust and differentiate themselves responsibly. Defining prohibited versus high-risk AI remains a challenge, underscoring the importance of forthcoming guidance from the European Commission. The EU AI Act marks a significant step in regulating AI, particularly high-risk applications. Companies must prepare comprehensive AI governance strategies to meet compliance deadlines effectively.
Need Help?
As an AI auditing firm, BABL AI helps companies navigate these requirements by identifying gaps and offering tailored advice. Don’t hesitate to reach out to BABL AI. Their team of Audit Experts can provide valuable insights on implementing AI while answering your questions and concerns.