UPDATE – JUNE 2025: The European Union enacted the AI Act on August 1, 2024. Authorities are rolling it out in phases. The ban on “unacceptable risk” AI systems and the requirement for AI literacy became enforceable on February 2, 2025. Rules for general-purpose AI systems will be enforced starting August 2, 2025. Most obligations for high-risk systems—including conformity assessments and documentation—will take effect on August 2, 2026. BABL AI offers independent audits, training, and documentation support aligned with these deadlines.
ORIGINAL BLOG POST:
Preparing for the EU AI Act
While U.S. lawmakers have dominated headlines with AI legislation, the European Union has been making quiet but major progress. The EU AI Act, formally a regulation laying down harmonised rules on artificial intelligence, is designed to serve as a global blueprint, and EU lawmaker Brando Benifei expects it to shape how other countries regulate AI. Final approval could come as soon as next month, so now is the time for companies to prepare.
Understanding where your company's AI systems fall within the EU AI Act's risk levels is a critical first step. The Act defines four categories: minimal, limited, high, and unacceptable risk. It affects more than just EU-based AI developers: it also applies to non-EU companies placing AI systems on the EU market and to any AI system whose outputs are used in the EU, even if the system is deployed elsewhere. In other words, the Act has global reach and will impact most AI systems.
High-Risk AI Systems: Compliance and Documentation
Companies that build high-risk AI systems must put a quality management system in place. This includes technical documentation, regular updates, and monitoring tools. Before these systems enter the market, they must pass a conformity assessment. After launch, providers must maintain system logs and notify regulators, importers, and deployers of any risks or corrective actions.
Deployers also carry compliance duties. They must apply technical and organizational safeguards, enable human oversight, and control data inputs. If risks arise, they must alert providers and distributors. Log retention and data protection impact assessments may also be required.
Human Oversight and Transparency Requirements
AI systems designed to interact with individuals must disclose, as appropriate, which functions are AI-enabled, whether human oversight is in place, who is responsible for decision-making, and end-users' right to object. End-users must be told when they are interacting with an AI system. Biometric systems must obtain consent before processing biometric or personal data. Artificially generated or manipulated content must be labeled as inauthentic and, where possible, must identify the person who generated or manipulated it.
Providers and deployers will face many questions as they prepare an AI system for market: What is the system's intent and type? Where does its data come from, and how is that data validated? What is the origin of the code? A practical first step is to build an inventory of all AI systems, regardless of their current deployment status. An inventory lets an organization define each system's intended purpose and capabilities, including detailed information on its architecture, infrastructure, and foundation. From there, organizations should establish transparent procedures and guidelines for their AI systems, train employees on the EU AI Act's requirements, and ensure compliance with monitoring, data protection, and other core obligations to avoid serious consequences.
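To make the inventory idea concrete, here is a minimal sketch of what one inventory record might look like in code. This is purely illustrative: the field names, the `AISystemRecord` class, and the example system are assumptions for demonstration, not a schema prescribed by the EU AI Act. Only the four risk categories come from the Act itself.

```python
from dataclasses import dataclass, field, asdict

# The EU AI Act's four risk categories (per the blog post above).
RISK_TIERS = {"minimal", "limited", "high", "unacceptable"}

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (illustrative fields)."""
    name: str
    intended_purpose: str
    risk_tier: str                  # must be one of RISK_TIERS
    deployed: bool                  # inventory covers systems regardless of status
    architecture: str = ""          # e.g. model family or foundation model used
    data_sources: list = field(default_factory=list)

    def __post_init__(self):
        # Catch typos in the risk classification early.
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier!r}")

# Example: registering a system that is not yet deployed.
inventory = [
    AISystemRecord(
        name="resume-screener",
        intended_purpose="rank job applications",
        risk_tier="high",           # hypothetical classification for this example
        deployed=False,
        architecture="fine-tuned transformer classifier",
        data_sources=["historical hiring records"],
    ),
]

# Pull out the high-risk entries that will need conformity assessments.
high_risk = [asdict(r) for r in inventory if r.risk_tier == "high"]
```

Keeping records in a structured form like this, rather than in scattered documents, makes it straightforward to filter for the high-risk systems that carry the heaviest documentation and conformity-assessment obligations.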
Need Help with EU AI Act Compliance?
If you need assistance navigating EU AI Act compliance, don't hesitate to contact BABL AI. One of their audit experts can offer valuable guidance and support before the EU AI Act goes into full effect.