UPDATE — JULY 2025: The information in this blog post is conceptually correct, but several key dates and developments have since been finalized. As of July 2025, the EU AI Act is now fully in force and moving through its phased implementation period:
- The EU AI Act was published in the Official Journal on July 12, 2024. It entered into force on August 1, 2024.
- Prohibitions on unacceptable-risk AI systems (e.g., social scoring, emotion recognition in workplaces and schools, and untargeted biometric identification) have been enforceable since February 2, 2025.
- General-purpose AI obligations and penalties take effect August 2, 2025, with a final Code of Practice expected soon.
- Full compliance for high-risk AI systems is required by August 2, 2026, including implementation of robust risk management, human oversight, transparency, and post-market monitoring requirements.
The regulation introduces significant penalties for noncompliance, scaled to the severity of the violation: the most serious infringements can draw fines of up to €35 million or 7% of global annual turnover, whichever is higher. It also mandates AI literacy, responsible AI design, and conformity assessments for high-risk use cases in sectors like healthcare, education, law enforcement, and employment.
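As a rough illustration of how that ceiling works (the €35 million and 7% figures come from the Act's top penalty tier; the helper function itself is just a sketch written for this post, not legal advice):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Illustrative only: the Act's top penalty tier is the higher of
    EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A firm with EUR 2 billion in turnover faces a ceiling of EUR 140 million,
# since 7% of its turnover exceeds the EUR 35 million floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # -> 140,000,000
```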
Ongoing developments as of 2025 include:
- The EU is finalizing secondary legislation and guidance to clarify obligations across the AI lifecycle.
- A new EU AI Office and European Artificial Intelligence Board are being operationalized to oversee enforcement and harmonization.
- Member states are preparing to designate notified bodies by August 2025, and must establish regulatory sandboxes by August 2026.
This post remains a useful overview of the EU AI Act’s structure and risk-based approach, but readers should reference the finalized compliance deadlines and updated guidance now available across official EU digital strategy platforms.
ORIGINAL BLOG POST:
The EU AI Act: Understanding its Implications and Implementation Challenges
The recently passed EU AI Act is a significant milestone: a comprehensive regulation designed to govern the use of AI systems across the European Union. It has undergone a rigorous legislative process involving the European Parliament, European Commission, and Council of the European Union to reconcile differing versions into a final agreed text.
Key Phases of Implementation
When the Act entered into force, it didn’t require immediate compliance. Instead, implementation rolls out in phases:
- 6 months after entry: Prohibited AI systems are banned.
- 12 months after: Obligations for general-purpose AI apply, and the Commission releases detailed guidance.
- 24 months after: All high-risk AI systems must meet full compliance.
Organizations have time to prepare, but need to start early. Requirements for high-risk systems include (a sketch of how these obligations might be tracked internally follows the list):
- Risk management and mitigation plans
- Detailed technical documentation
- Human oversight procedures
- Ongoing post-market monitoring
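To make those obligations concrete, here is a minimal, purely illustrative sketch of how a provider might track them for one system. The data structure and every field name are our own invention for this post; the Act prescribes no such schema, only that evidence for each obligation exists.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HighRiskComplianceRecord:
    """Illustrative internal record for one high-risk AI system.
    The fields mirror the Act's headline obligations; this is a sketch,
    not an official schema."""
    system_name: str
    risk_assessment_completed: bool = False       # risk management and mitigation plan
    technical_docs_url: str | None = None         # detailed technical documentation
    human_oversight_procedure: str | None = None  # who can intervene, and how
    monitoring_reports: list[date] = field(default_factory=list)  # post-market monitoring

    def outstanding_items(self) -> list[str]:
        """List the headline obligations this system still lacks evidence for."""
        gaps = []
        if not self.risk_assessment_completed:
            gaps.append("risk management and mitigation plan")
        if self.technical_docs_url is None:
            gaps.append("technical documentation")
        if self.human_oversight_procedure is None:
            gaps.append("human oversight procedure")
        if not self.monitoring_reports:
            gaps.append("post-market monitoring evidence")
        return gaps

record = HighRiskComplianceRecord(system_name="resume-screening-model")
print(record.outstanding_items())  # all four obligations still open
```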
Which Sectors Are Affected?
The high-risk category covers AI used in critical sectors such as:
- Healthcare (e.g., diagnostics, treatment planning)
- Education (e.g., student assessment tools)
- Employment (e.g., resume screening, performance evaluation)
- Law enforcement (e.g., predictive policing, surveillance)
These systems must pass strict conformity assessments before they can be placed on the market or put into use.
Prohibited systems, like real-time biometric identification in public spaces, are banned outright unless a narrowly defined public-interest exemption applies.
Limited-risk systems, such as chatbots, require transparency disclosures. Minimal-risk tools, like spam filters, face no obligations.
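Taken together, the Act's four tiers form a simple decision ladder. Purely as an illustration (the mapping below is a toy example built from the use cases named in this post, not a legal classification tool), the logic might look like this:

```python
# Illustrative only: a toy mapping of example use cases to the Act's four
# risk tiers. Real classification requires legal analysis of the Act's annexes.
RISK_TIERS = {
    "social scoring": "prohibited",              # banned outright
    "real-time public biometric ID": "prohibited",
    "medical diagnostics": "high",               # conformity assessment required
    "resume screening": "high",
    "customer service chatbot": "limited",       # transparency disclosure required
    "spam filter": "minimal",                    # no obligations
}

def risk_tier(use_case: str) -> str:
    """Look up a use case's tier; anything unlisted needs expert review."""
    return RISK_TIERS.get(use_case, "unclassified: needs legal review")

print(risk_tier("resume screening"))  # -> high
```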
Implementation Challenges
Many businesses—especially small and mid-sized companies—face challenges:
- Understanding the risk classification of their AI tools
- Building documentation and audit trails from scratch
- Finding experts who understand both tech and compliance
- Keeping up with evolving guidance and secondary legislation
Still, the Act is a strategic opportunity. Organizations that move early can build trust, differentiate themselves, and reduce long-term regulatory risk.
Final Thoughts
The EU AI Act sets a global benchmark for AI regulation. It tackles systemic risks, introduces strong enforcement mechanisms, and promotes responsible AI by design. Success will depend on how companies prepare. Legal, technical, and operational teams must collaborate to ensure AI systems meet the law’s expectations.
Need Help?
As an AI auditing firm, BABL AI helps companies navigate these requirements by identifying gaps and offering tailored advice. Don't hesitate to reach out: their team of audit experts can provide valuable insights on implementing AI while answering your questions and concerns.