The European Union’s landmark Artificial Intelligence (AI) regulation, known as the EU AI Act, has officially been published in the bloc’s Official Journal as Regulation (EU) 2024/1689. This significant milestone starts the countdown to the enforcement of new legal standards designed to regulate AI applications based on their risk levels.
In a matter of weeks, on August 1, 2024, the EU AI Act will enter into force, marking the beginning of a phased approach to implementing the new rules. By August 2, 2026, most of the regulation’s provisions will be fully applicable to AI developers. However, various deadlines are staggered between now and then, with different provisions taking effect at different times to ensure a smooth transition.

The EU AI Act introduces a risk-based framework that categorizes AI applications into tiers according to their potential risk to public safety and fundamental rights. This approach aims to balance innovation with stringent oversight where necessary.
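To make the tiered structure concrete, here is a minimal, illustrative Python sketch of how a compliance team might tag its systems by risk tier. The enum names, example systems, and the `ai_inventory` mapping are hypothetical illustrations, not terms defined by the regulation:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the EU AI Act's structure."""
    UNACCEPTABLE = "prohibited practice (e.g., social scoring)"
    HIGH = "high risk (e.g., biometrics, employment, education)"
    LIMITED = "limited risk (transparency obligations apply)"
    MINIMAL = "minimal risk (no new obligations)"

# Hypothetical inventory a compliance team might maintain.
ai_inventory = {
    "cv-screening-model": RiskTier.HIGH,
    "website-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```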
The majority of AI applications are deemed low risk and, therefore, will not face strict regulatory scrutiny. These include AI systems used in non-critical areas where the potential for harm is minimal. However, transparency obligations will apply, requiring developers to provide clear information about the AI system’s functioning and limitations.
High-risk AI systems, such as those used in biometric identification, law enforcement, critical infrastructure, employment, and education, are subject to more rigorous standards. Developers of these applications must ensure high levels of data quality, implement anti-bias measures, and conduct thorough risk assessments. Additionally, they must maintain documentation and logs to facilitate oversight and accountability.
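The record-keeping duty, automatic logging of events over a high-risk system’s lifetime, lends itself to a simple illustration. The sketch below is a hypothetical structured audit logger, not a compliance-certified implementation; field names such as `model_version` and `decision` are assumptions:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit trail for a high-risk AI system. The EU AI Act
# requires automatic event logging for traceability; the specific
# fields recorded here are illustrative assumptions.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_decision(model_version: str, input_payload: dict, decision: str) -> None:
    """Record one model decision as a JSON line for later oversight."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input rather than storing raw personal data.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    logging.info(json.dumps(record))

log_decision("screening-model-1.4.2", {"applicant_id": "A-1001"},
             "advance_to_interview")
```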
For general-purpose AI models, like those underlying popular tools such as OpenAI’s GPT-4, there are specific transparency and systemic risk assessment requirements. A model is presumed to have high-impact capabilities, and therefore to pose systemic risk, when the cumulative compute used to train it exceeds 10^25 floating-point operations (FLOPs); models above that threshold must undergo regular evaluations and adopt measures to mitigate systemic risks.
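The threshold test itself is simple arithmetic, as the short sketch below shows. The training compute figure used here is invented for illustration, not a disclosed number for any real model:

```python
# Presumption threshold from Regulation (EU) 2024/1689:
# cumulative training compute greater than 10^25 FLOPs.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the model is presumed to pose systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Illustrative figure only.
example_model_flops = 2.1e25
print(presumed_systemic_risk(example_model_flops))  # True
```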
The implementation of the EU AI Act involves several critical deadlines:
- February 2, 2025: The first chapters and general obligations start to apply, including the ban on prohibited use cases such as social scoring systems and real-time biometric identification in public spaces without proper authorization.
- May 2, 2025: Codes of practice for in-scope AI applications are due, with the AI Office overseeing their development and implementation.
- August 2, 2025: Transparency and systemic risk assessment requirements for general-purpose AI models become enforceable.
- August 2, 2026: The regulation becomes generally applicable, including the more comprehensive rules for high-risk AI systems, and member states must have at least one AI regulatory sandbox in operation to support innovation and compliance.
- August 2, 2027: Extended compliance deadline for high-risk AI systems embedded in already-regulated products and for general-purpose AI models already on the market.
- August 2, 2028: The European Commission evaluates and reviews the regulation’s effectiveness and the potential need for amendments.
The EU AI Act also sets out stringent penalties for non-compliance. For the most serious violations, such as deploying prohibited practices, companies can face fines of up to 7% of global annual turnover or €35 million, whichever is higher, with lower caps for lesser infringements. This penalty structure underscores the EU’s commitment to ensuring that AI development and deployment are carried out responsibly and ethically.
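Because the cap is “whichever is higher,” the effective maximum scales with company size. A quick worked example, using an invented turnover figure for illustration:

```python
# Maximum fine for the most serious violations: the higher of
# EUR 35 million or 7% of global annual turnover.
def max_fine_eur(global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover:
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000 -> the 7% cap dominates
```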
The publication of the EU AI Act in the Official Journal is a landmark step towards comprehensive AI regulation in the European Union. It reflects the EU’s proactive stance in setting global standards for AI governance, emphasizing the protection of fundamental rights while fostering innovation.
Need Help?
If you’re wondering how the EU AI Act, or other laws and regulations on AI, could impact you, reach out to BABL AI. Their Audit Experts are ready to help with your questions and concerns while providing valuable assistance.