EU AI Act Published in Official Journal, Kicking Off Compliance Deadlines
The European Union’s landmark Artificial Intelligence (AI) regulation, known as the EU AI Act, has officially been published in the bloc’s Official Journal under Regulation 2024/1689. This significant milestone initiates the countdown to the enforcement of new legal standards designed to regulate AI applications based on their risk levels.
On August 1, 2024, the EU AI Act will come into force, triggering a phased approach to enforcement. By August 2, 2026, most provisions will apply fully to AI developers. In the meantime, several deadlines will arrive in stages, giving businesses time to adjust. The EU’s risk-based framework categorizes AI applications into tiers according to their potential impact on safety and fundamental rights. This design balances the need for innovation with stronger oversight in high-risk areas.
What Falls Under the Law
Most AI applications are classified as low risk and will not face heavy scrutiny. However, developers must still meet transparency obligations, including clear information about how systems work and their limitations. By contrast, high-risk AI systems—such as those in biometric identification, law enforcement, critical infrastructure, employment, and education—must follow strict standards. These include data quality controls, bias mitigation, detailed risk assessments, and ongoing documentation. General-purpose AI models, like OpenAI’s GPT-4, are also in scope. If they carry systemic risks due to their size or use, they must undergo regular evaluations and meet transparency requirements.
EU AI Act Timeline
The implementation of the EU AI Act involves several critical deadlines:
- February 2, 2025: Prohibitions on unacceptable-risk practices, such as social scoring systems and real-time biometric identification in public spaces without proper authorization, take effect, along with the Act’s initial chapters and general obligations.
- May 2, 2025: Codes of practice for in-scope AI applications are due, with the AI Office overseeing their development and implementation.
- August 2, 2025: Transparency and risk assessment requirements for general-purpose AI models become enforceable, together with the Act’s governance and penalty provisions.
- August 2, 2026: Most remaining provisions apply, including the more comprehensive rules for high-risk AI systems; member states are also required to establish at least one AI regulatory sandbox to support innovation and compliance.
- August 2, 2027: Compliance deadline for providers of high-risk AI systems embedded in regulated products and for general-purpose AI models already on the market.
- August 2, 2028: Evaluation and review of the regulation’s effectiveness and the potential need for amendments.
Strong Penalties for Violations
To ensure compliance, the EU AI Act includes tough penalties. Companies that engage in prohibited AI practices can face fines of up to 7% of global annual turnover or €35 million, whichever is higher, with lower tiers applying to other violations. This demonstrates the EU’s commitment to making AI both ethical and accountable.
Why It Matters
The publication of the EU AI Act in the Official Journal is more than a technical milestone. It signals the EU’s intent to set the global standard for AI governance, emphasizing fundamental rights, public trust, and responsible innovation.
Need Help?
If you’re wondering how the EU AI Act, or other laws and regulations on AI, could impact you, reach out to BABL AI. Their Audit Experts are ready to help you with your concerns and questions while providing valuable assistance.