UPDATE (AUGUST 2025): This article remains accurate and reflects the current legislative status and implementation timeline of the European Union's Artificial Intelligence Act (AI Act), the first comprehensive AI law in the world. The European Parliament endorsed the Act in March 2024, the European Council gave final approval on May 21, 2024, and the Act was formally signed on June 13, 2024. It was published in the EU's Official Journal on July 12, 2024 and entered into force 20 days later, on August 1, 2024.
The AI Act is being implemented in phases:
- Bans on unacceptable-risk practices, such as social scoring and certain biometric uses, took effect on February 2, 2025.
- Codes of practice for general-purpose AI (GPAI) models were due by May 2, 2025, ahead of the GPAI obligations that apply from August 2, 2025.
- Key obligations for high-risk AI systems become enforceable on August 2, 2026, with transition periods for certain high-risk systems extending to August 2, 2027.
- The European Commission continues to issue guidance, implementing acts, and further clarification to support compliance and oversight; the timeline is summarized in the sketch after this list.
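For teams tracking these deadlines, the phased schedule above can be captured as simple data. The sketch below is a minimal, hypothetical illustration using the dates summarized in this update; the MILESTONES mapping and applicable_milestones helper are not part of any official tooling, and exact dates should always be verified against the Official Journal and Commission guidance.

```python
from datetime import date

# Key AI Act milestones as summarized above (illustrative; confirm exact
# dates against the Official Journal and Commission guidance).
MILESTONES = {
    date(2024, 8, 1):  "Act enters into force",
    date(2025, 2, 2):  "Bans on unacceptable-risk practices apply",
    date(2025, 5, 2):  "Codes of practice for GPAI models due",
    date(2025, 8, 2):  "Obligations for GPAI model providers apply",
    date(2026, 8, 2):  "Key obligations for high-risk AI systems apply",
    date(2027, 8, 2):  "Extended transition periods for certain high-risk systems end",
}

def applicable_milestones(as_of: date) -> list[str]:
    """Return the milestones that have already taken effect on a given date."""
    return [label for day, label in sorted(MILESTONES.items()) if day <= as_of]

if __name__ == "__main__":
    # Example: which phases are in force as of August 1, 2025?
    for item in applicable_milestones(date(2025, 8, 1)):
        print("-", item)
```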
The Act establishes the AI Office, a Scientific Panel, and the AI Board to support enforcement and consistent application across the EU. It applies only to areas governed by EU law and excludes military and research use. Provisions such as regulatory sandboxes, risk-based obligations, and scaled enforcement for SMEs are accurately reflected in this article.
ORIGINAL NEWS STORY:
European Council Approves Landmark AI Legislation
On May 21, the European Council approved the Artificial Intelligence Act, also known as the EU AI Act, a groundbreaking law designed to harmonize AI regulations across the European Union. This landmark legislation, the first of its kind globally, adopts a risk-based approach to AI regulation, setting stricter rules for higher-risk AI systems to safeguard societal welfare. By doing so, the EU aims to set a global standard for AI regulation, emphasizing trust, transparency, and accountability.
The AI Act seeks to foster the development and adoption of safe and trustworthy AI systems within the EU’s single market, benefiting both private and public sectors. It also aims to protect the fundamental rights of EU citizens while stimulating investment and innovation in AI across Europe. The legislation applies exclusively to areas governed by EU law, with exemptions for military, defense, and research purposes.
The adoption of the AI Act represents a significant milestone for the European Union. Mathieu Michel, Belgian Secretary of State for Digitization, praised the legislation, noting its importance in addressing global technological challenges while creating opportunities for societal and economic advancement. Michel emphasized that the AI Act underscores the need for trust and transparency in handling emerging technologies, ensuring that innovation can thrive in a regulated environment.
Risk Levels
The AI Act categorizes AI systems based on their risk levels. Low-risk AI systems will face only minimal transparency obligations, while high-risk AI systems must meet stringent requirements to access the EU market. Certain AI practices, such as cognitive behavioral manipulation and social scoring, will be banned outright because they pose unacceptable risks. Also prohibited are predictive policing based on profiling and biometric categorization systems that infer sensitive characteristics such as race, religious beliefs, or sexual orientation. The legislation also addresses general-purpose AI (GPAI) models: GPAI models that do not pose systemic risks will have to meet limited transparency requirements, while those that do will be subject to more stringent regulations.
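As a rough illustration of how obligations scale with risk, the tiers described above can be modeled as a simple enumeration mapped to paraphrased duties. This is a hypothetical sketch: the RiskTier names and OBLIGATIONS summaries below paraphrase the risk-based approach described in this article and are not an official taxonomy or API.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers under the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"   # banned practices, e.g. social scoring
    HIGH = "high"                   # strict requirements before EU market access
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated

# Paraphrased, non-exhaustive summary of how obligations scale by tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited outright (e.g. social scoring, manipulative systems).",
    RiskTier.HIGH: "Stringent requirements, registration, and oversight before market access.",
    RiskTier.LIMITED: "Transparency duties such as informing users they are interacting with AI.",
    RiskTier.MINIMAL: "Minimal obligations; voluntary codes of conduct encouraged.",
}

def summarize(tier: RiskTier) -> str:
    """Return the paraphrased obligation summary for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {summarize(tier)}")
```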
To ensure proper enforcement of the AI Act, several governing bodies will be established. An AI Office within the European Commission will oversee the enforcement of the rules. A scientific panel of independent experts will support these activities, and an AI Board comprising member states’ representatives will advise on the consistent application of the law. Additionally, an advisory forum will provide technical expertise to the AI Board and the Commission.
Infringements of the AI Act will result in fines calculated as a percentage of the offending company's global annual turnover or a predetermined fixed amount, whichever is higher. SMEs and startups will face proportionate administrative fines. Entities providing public services must carry out a fundamental rights impact assessment before deploying high-risk AI systems. Increased transparency is mandated for the development and use of high-risk AI systems, with certain users required to register in the EU database for high-risk AI systems and to inform individuals when emotion recognition systems are in use.
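Because each penalty ceiling is defined as the higher of a fixed amount or a percentage of worldwide annual turnover, the applicable maximum reduces to a simple max() calculation. The sketch below is illustrative only: max_fine is a hypothetical helper, and the tier figures shown (for example, EUR 35 million or 7% of turnover for prohibited practices) are commonly cited ceilings that should be verified against the final text.

```python
def max_fine(annual_turnover_eur: float,
             fixed_cap_eur: float,
             turnover_pct: float) -> float:
    """Return the penalty ceiling: the higher of a fixed amount or a
    percentage of worldwide annual turnover, as described above."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Illustrative tiers; amounts are commonly cited ceilings and should be
# verified against the Official Journal before relying on them.
PENALTY_TIERS = {
    "prohibited_practices":  dict(fixed_cap_eur=35_000_000, turnover_pct=0.07),
    "other_obligations":     dict(fixed_cap_eur=15_000_000, turnover_pct=0.03),
    "incorrect_information": dict(fixed_cap_eur=7_500_000,  turnover_pct=0.01),
}

if __name__ == "__main__":
    turnover = 2_000_000_000  # hypothetical EUR 2 billion global annual turnover
    for violation, params in PENALTY_TIERS.items():
        print(f"{violation}: up to EUR {max_fine(turnover, **params):,.0f}")
```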
Sandboxes
The AI Act promotes an innovation-friendly legal framework, encouraging evidence-based regulatory learning. The law includes provisions for AI regulatory sandboxes, allowing for the controlled development, testing, and validation of innovative AI systems in real-world conditions.
Conclusion
Following approval, the AI Act will be signed by the presidents of the European Parliament and the Council, then published in the EU’s Official Journal. It will enter into force 20 days after publication and become applicable two years later, with exceptions for specific provisions. The AI Act is a crucial component of the EU’s policy to advance safe and lawful AI across its single market. The proposal was submitted by Thierry Breton, Commissioner for Internal Market, in April 2021. European Parliament rapporteurs Brando Benifei and Dragoş Tudorache facilitated a provisional agreement on December 8, 2023, paving the way for the AI Act’s adoption.
Need Help?
If you’re wondering how the EU AI Act, or any other AI regulations and laws worldwide could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.