UPDATE – JANUARY 2026:
The EU AI Act officially entered into force on August 1, 2024. With it comes a phased timeline of key provisions taking effect in stages through 2027. Bans on unacceptable-risk AI practices and AI literacy requirements have been enforceable since February 2025. However, most high-risk AI system obligations will not be fully in force until August 2026–2027, pending finalization of technical standards and supporting guidance. The Act remains the world’s most comprehensive AI law, influencing legislation globally.
ORIGINAL NEWS STORY:
EU Agrees on Landmark AI Legislation
The European Union (EU) achieved a historic milestone on December 8, 2023, with the European Parliament and Council reaching a provisional agreement on the AI Act. According to an official press release, the Act is designed to ensure the safety of AI systems used in the EU, uphold fundamental human rights, and foster innovation. EU lawmakers established key rules to address risks in AI applications.
Banned Applications
- Biometric categorization using sensitive characteristics
- Scraping facial images to create facial recognition databases
- Emotion recognition in work and education settings
- Social scoring based on behavior
- AI systems that manipulate human behavior
- AI systems that exploit vulnerabilities
Law Enforcement Exemptions
Narrow exceptions permit biometric identification in public spaces for law enforcement purposes, subject to prior judicial authorization. “Real-time” use is strictly limited to searching for victims of specific crimes, preventing terrorist threats, or locating and identifying suspects of serious crimes. “Post” remote use is restricted to the targeted search of persons suspected or convicted of a serious crime.
High-Risk System Obligations
For AI systems classified as high-risk, the Act imposes obligations related to fundamental rights impact assessments, transparency, human oversight, and risk management. Individuals will have the right to file complaints and receive explanations for decisions made by high-risk AI systems that affect their rights.
General AI System Requirements
General AI systems and models must adhere to transparency requirements, including technical documentation, copyright compliance, and detailed summaries of training content. High-impact models with systemic risk face additional obligations, covering risk assessments, testing, incident reporting, cybersecurity, and energy efficiency.
Innovation Support
Provisions within the AI Act aim to promote regulatory sandboxes for AI systems and models, fostering real-world testing to assist businesses, particularly small and mid-size enterprises, in developing AI solutions.
Fines
Violations of the AI Act can result in fines of up to €35 million or 7% of global annual turnover, depending on the type and severity of the infringement. Lesser violations may carry penalties of up to €7.5 million or 1.5% of turnover.
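For undertakings, the Act applies the higher of the fixed amount and the turnover-based percentage. As a rough illustration only (not legal advice), that rule can be sketched as a simple calculation; the function name and example turnover figure below are hypothetical:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the applicable maximum fine: the higher of the fixed cap
    and the percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Hypothetical company with €1 billion global annual turnover committing
# a prohibited-practice violation: 7% of turnover (€70M) exceeds the
# €35M fixed cap, so the turnover-based figure sets the maximum.
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 70000000.0
```

For a smaller company whose 7% figure falls below €35 million, the fixed cap would set the maximum instead.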
Conclusion
The EU AI Act has now entered into force, with enforcement unfolding in stages over the next several years. Organizations developing or deploying AI systems should be preparing now for upcoming obligations, particularly those affecting high-risk and general-purpose AI systems.
Need Help?
If you need help understanding how the EU AI Act’s timeline, or other AI legislation around the globe, could impact your company, reach out to BABL AI. Their team of audit experts can offer valuable guidance.

