European Council Approves Landmark AI Legislation

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/21/2024
In News

UPDATE – FEBRUARY 2026:

Since the August 2025 update, the EU Artificial Intelligence Act has moved from early implementation into operational rollout. While the overall phased timeline and core obligations remain unchanged, several institutional and practical developments have advanced enforcement preparation across the European Union.

The European Commission’s AI Office became fully operational in January 2026, taking responsibility for coordinating oversight of general-purpose AI (GPAI) models, managing cooperation with national authorities, and supporting consistent implementation across member states. The Office now works alongside the AI Board and the scientific expert structures established under the Act to harmonize guidance and enforcement practices.

Following the May 2, 2025 deadline, major AI providers submitted draft Codes of Practice for GPAI models. These submissions are currently under review, with the Commission and member states negotiating revisions toward harmonized standards by mid-2026. During late 2025, the Commission also issued technical guidance clarifying how “systemic risk” GPAI models will be identified, focusing on factors such as model scale, computational resources, and downstream societal impact.

Preparation for the next major milestone — high-risk AI obligations beginning August 1, 2026 — is now underway. Regulators and industry participants are actively using AI regulatory sandboxes to test conformity assessment processes and practical compliance approaches. This testing is particularly important in sectors such as healthcare, finance, and critical infrastructure.

Early enforcement signals have also emerged. National regulators in several member states issued preliminary warnings related to prohibited biometric categorization practices. This reflects the enforcement phase that began when unacceptable-risk bans took effect in February 2025.

International coordination has expanded as well. The EU has begun aligning technical discussions with global partners, including the United States, on topics such as model evaluation and transparency reporting. However, the EU AI Act remains distinct in its legal structure and risk-based framework.

Overall, the AI Act’s legal deadlines remain unchanged, but implementation has matured significantly. The focus has shifted from adoption to operational enforcement, technical guidance, and preparation for the upcoming high-risk compliance phase in 2026.

ORIGINAL NEWS STORY:

European Council Approves Landmark AI Legislation


On May 21, the European Council approved the Artificial Intelligence Act, also known as the EU AI Act, a groundbreaking law designed to harmonize AI regulations across the European Union. This landmark legislation, the first of its kind globally, adopts a risk-based approach to AI regulation, setting stricter rules for higher-risk AI systems to safeguard societal welfare. In doing so, the EU aims to set a global standard for AI regulation, with an emphasis on trust, transparency, and accountability.


The AI Act seeks to foster the development and adoption of safe and trustworthy AI systems within the EU’s single market, benefiting both private and public sectors. It also aims to protect the fundamental rights of EU citizens while stimulating investment and innovation in AI across Europe. The legislation applies exclusively to areas governed by EU law, with exemptions for military, defense, and research purposes.


The adoption of the AI Act represents a significant milestone for the European Union. Mathieu Michel, Belgian Secretary of State for Digitization, praised the legislation, noting its importance in addressing global technological challenges while creating opportunities for societal and economic advancement. Michel also emphasized that the AI Act underscores the need for trust and transparency in handling emerging technologies, ensuring that innovation can thrive in a regulated environment.


Risk Levels


The AI Act categorizes AI systems based on their risk levels. Low-risk AI systems face minimal transparency obligations, while high-risk AI systems must meet stringent requirements to access the EU market. Certain AI practices, such as cognitive behavioral manipulation and social scoring, are banned outright due to their unacceptable risks. Also prohibited are predictive policing based on profiling and systems that use biometric data to categorize individuals by characteristics such as race, religion, or sexual orientation. The legislation also addresses general-purpose AI (GPAI) models: those that do not pose systemic risks need only meet limited transparency requirements, while those with systemic risks are subject to more stringent regulations.


Governance and Enforcement


The Act creates several governance and enforcement bodies: an AI Office within the European Commission to oversee the rules, a scientific panel of experts to support technical work, an AI Board of member state representatives to ensure consistent application, and an advisory forum to offer additional expertise.

Violations can lead to steep fines, calculated as a percentage of global annual turnover or as a fixed amount, whichever is higher, with the most serious infringements subject to penalties of up to 7% of turnover or €35 million. SMEs and startups face proportionate penalties. Before deploying high-risk AI in public services, deployers must perform fundamental rights impact assessments. Transparency rules also require certain users to register high-risk AI systems in an EU database and to inform people when emotion recognition systems are being used.


Innovation and Sandboxes


To encourage responsible innovation, the Act introduces regulatory sandboxes. These allow companies to test and validate AI in real-world conditions under supervision.


Conclusion


Following approval, the AI Act will be signed by the presidents of the European Parliament and the Council and published in the EU’s Official Journal. It will enter into force 20 days after publication and become applicable two years later, with exceptions for specific provisions. The AI Act is a crucial component of the EU’s policy to advance safe and lawful AI across its single market. The European Commission, under Internal Market Commissioner Thierry Breton, submitted the proposal in April 2021, and European Parliament rapporteurs Brando Benifei and Dragoş Tudorache facilitated a provisional agreement on December 8, 2023, paving the way for the AI Act’s adoption.


Need Help?


If you’re wondering how the EU AI Act, or any other AI regulations and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.
