UPDATE — AUGUST 2025: The EU Artificial Intelligence Act (AI Act), the world’s first comprehensive AI law, is now in phased implementation.
- The European Parliament endorsed the Act in March 2024.
- The European Council gave final approval on May 21, 2024.
- It was signed on June 13, 2024, and published in the Official Journal on July 12, 2024.
- The law entered into force on August 1, 2024.
Implementation is staggered. Bans on unacceptable-risk practices, including social scoring and certain biometric uses, took effect on February 2, 2025. Codes of practice for general-purpose AI models were due by May 2, 2025, and obligations for those models apply from August 2, 2025. Most remaining obligations, including those for high-risk systems, begin on August 2, 2026, with some transition periods extending to August 2, 2027.
The law also created the AI Office, a Scientific Panel, and the AI Board to support enforcement. It applies only to areas under EU law and excludes military and research use. Sandboxes, scaled rules for SMEs, and risk-based obligations remain central features.
ORIGINAL NEWS STORY:
European Council Approves Landmark AI Legislation
On May 21, the European Council approved the Artificial Intelligence Act, also known as the EU AI Act, a groundbreaking law designed to harmonize AI regulations across the European Union. This landmark legislation, the first of its kind globally, adopts a risk-based approach to AI regulation, setting stricter rules for higher-risk AI systems to safeguard societal welfare. By doing so, the EU aims to set a global standard for AI regulation, emphasizing trust, transparency, and accountability.
The AI Act seeks to foster the development and adoption of safe and trustworthy AI systems within the EU’s single market, benefiting both private and public sectors. It also aims to protect the fundamental rights of EU citizens while stimulating investment and innovation in AI across Europe. The legislation applies exclusively to areas governed by EU law, with exemptions for military, defense, and research purposes.
The adoption of the AI Act represents a significant milestone for the European Union. Mathieu Michel, Belgian Secretary of State for Digitization, praised the legislation, noting its importance in addressing global technological challenges while creating opportunities for societal and economic advancement. Michel emphasized that the AI Act underscores the need for trust and transparency in handling emerging technologies, ensuring that innovation can thrive in a regulated environment.
Risk Levels
The AI Act categorizes AI systems based on their risk levels. Low-risk AI systems will face minimal transparency obligations, while high-risk AI systems must meet stringent requirements to access the EU market. Certain AI practices, such as cognitive behavioral manipulation and social scoring, will be banned because of their unacceptable risks. The use of AI for predictive policing based on profiling will likewise be prohibited, as will biometric categorization systems that infer sensitive attributes such as race, religious beliefs, or sexual orientation. The legislation also addresses general-purpose AI (GPAI) models: GPAI models that do not pose systemic risks will have to meet limited transparency requirements, while those with systemic risks will be subject to more stringent obligations.
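To make the tiered structure concrete, here is a minimal sketch of how the risk categories might be modeled in code. It is illustrative only: the tier names, obligation labels, and the `obligations_for` helper are chosen for this example and are not taken from the Act's text.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the AI Act's categories."""
    UNACCEPTABLE = "unacceptable"  # banned practices, e.g. social scoring
    HIGH = "high"                  # strict requirements before EU market access
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no additional obligations


# Hypothetical obligation labels per tier, for illustration only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["conformity assessment", "risk management", "EU database registration"],
    RiskTier.LIMITED: ["transparency notices to users"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```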
Governance and Enforcement
The Act created several enforcement bodies. The AI Office within the European Commission oversees the rules. A scientific panel of experts supports technical work. An AI Board of member state representatives ensures consistent application, and an advisory forum offers additional expertise.
Violations can lead to steep fines, set as a percentage of global annual turnover or a fixed amount. SMEs and startups face proportionate penalties. Before deploying high-risk AI in public services, deployers must perform fundamental rights impact assessments. Transparency rules also require certain users to register high-risk AI systems in an EU database and to inform people when emotion recognition is used.
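As a rough illustration of how the "percentage of turnover or fixed amount" structure works, the sketch below computes a fine ceiling. The defaults reflect the Act's top tier for prohibited practices (EUR 35 million or 7% of worldwide annual turnover, whichever is higher, with the lower figure applying to SMEs); the function name and inputs are hypothetical.

```python
def fine_ceiling(worldwide_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_share: float = 0.07,
                 is_sme: bool = False) -> float:
    """Illustrative ceiling for an AI Act administrative fine.

    Fines are expressed as a fixed amount or a share of worldwide annual
    turnover; for most undertakings the higher of the two applies, while
    SMEs and startups are capped at the lower figure. The defaults here
    reflect the top tier for prohibited practices.
    """
    turnover_based = worldwide_turnover_eur * turnover_share
    if is_sme:
        return min(fixed_cap_eur, turnover_based)
    return max(fixed_cap_eur, turnover_based)


# Example: a company with EUR 2 billion in worldwide annual turnover.
print(f"Fine ceiling: EUR {fine_ceiling(2_000_000_000):,.0f}")  # EUR 140,000,000
```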
Innovation and Sandboxes
To encourage responsible innovation, the Act introduces regulatory sandboxes. These allow companies to test and validate AI in real-world conditions under supervision.
Conclusion
Following approval, the AI Act will be signed by the presidents of the European Parliament and the Council, then published in the EU’s Official Journal. It will enter into force 20 days after publication and become applicable two years later, with exceptions for specific provisions. The AI Act is a crucial component of the EU’s policy to advance safe and lawful AI across its single market. The European Commission, under Internal Market Commissioner Thierry Breton, submitted the proposal in April 2021. European Parliament rapporteurs Brando Benifei and Dragoş Tudorache facilitated a provisional agreement with the Council on December 8, 2023, paving the way for the AI Act’s adoption.
Need Help?
If you’re wondering how the EU AI Act, or any other AI regulations and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.