Preparing for the EU AI Act
While the United States has dominated recent headlines on AI laws, European Union lawmakers have been diligently working behind closed doors on their own groundbreaking legislation, the Harmonised Rules on Artificial Intelligence, better known as the EU AI Act. According to EU lawmaker Brando Benifei, the EU AI Act is expected to serve as a global blueprint, shaping the regulatory landscape for AI in countries well beyond Europe. Draft rules could gain approval as soon as next month, so companies should begin preparing now.
A crucial first step is understanding where a company's systems fall among the four risk levels defined by the EU AI Act: minimal risk, limited risk, high risk, and unacceptable risk. The Act's reach is broad: it covers providers placing AI systems on the EU market, whether they are established in the EU or in a third country; deployers of AI systems located within the EU; and providers and deployers in third countries when the output produced by their AI systems is used in the EU. Given this breadth, the EU AI Act is likely to affect most AI systems.
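For teams triaging their own systems, the four tiers can be captured in a simple lookup. Everything below, including the example mappings and the helper function, is an illustrative sketch under assumed names; actual classification depends on the Act's annexes and careful legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers named in the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical example mappings for illustration only; these are not
# legal determinations.
EXAMPLE_CLASSIFICATION = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_service_chatbot": RiskTier.LIMITED,
    "cv_screening_tool": RiskTier.HIGH,
    "social_scoring_system": RiskTier.UNACCEPTABLE,
}

def requires_conformity_assessment(tier: RiskTier) -> bool:
    """High-risk systems must pass a conformity assessment before market entry."""
    return tier is RiskTier.HIGH
```

A triage pass like this only tells you which systems to send for proper legal review; it is a starting point, not a compliance determination.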
Providers of high-risk AI systems must establish a quality management system that includes a robust monitoring system and up-to-date technical documentation. Before a high-risk AI system enters the market, it must undergo a conformity assessment procedure; once on the market, the provider must keep the logs generated by the system to demonstrate ongoing compliance. Providers must also inform the relevant national competent authorities, distributors, importers, and deployers of any risks related to their AI systems and of any corrective actions taken.
Deployers of high-risk AI systems must implement appropriate technical and organizational measures to ensure compliance. They must maintain human oversight and control over input data, promptly inform the provider or distributor of any risks associated with the system, retain the logs the system generates, and carry out data protection impact assessments where applicable.
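The log-retention obligation mentioned above can be met in many ways; the Act does not mandate a storage format. The sketch below assumes one common approach, an append-only JSON-lines file, purely as an illustration.

```python
import json
import time
from pathlib import Path

def append_log(log_dir: Path, event: dict) -> None:
    """Append one automatically generated record as a JSON line.

    The schema here (a "timestamp" field plus arbitrary event data)
    is an assumption for illustration, not a regulatory requirement.
    """
    log_dir.mkdir(parents=True, exist_ok=True)
    record = {"timestamp": time.time(), **event}
    with (log_dir / "ai_system.log.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

An append-only file is easy to retain and audit, but whatever mechanism is used, the point is that generated logs are kept and retrievable for the required period.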
AI systems designed to interact with individuals must disclose, as appropriate, which functions are AI-enabled, whether human oversight is in place, who is responsible for decision-making, and end-users' right to object. End-users must be informed that they are interacting with an AI system. For any biometric system, consent must be obtained before biometric or other personal data is processed. Artificially generated or manipulated content must be labeled as inauthentic and, where possible, must identify the person who generated or manipulated it.
Providers and deployers will face a host of questions as they prepare an AI system for the market. Key considerations include the intent and type of the AI, where its information is sourced, how gathered data is validated, and where the code originates. A good starting point is an inventory of all AI systems, regardless of deployment status, in which the organization defines each system's intended purpose and capabilities, including details of its architecture, infrastructure, and foundation. Organizations should also establish transparent procedures and guidelines for their AI systems, make employees aware of EU AI Act requirements, and ensure compliance with the monitoring, data protection, and other essential requirements that apply to all AI systems; failing to do so could carry dire consequences.
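The inventory step above can be sketched as a structured record per system. Every field name here is an assumption about what such an inventory might track, not a required format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (field names are illustrative)."""
    name: str
    intended_purpose: str
    risk_tier: str                 # "minimal", "limited", "high", "unacceptable"
    deployed: bool                 # the inventory should include systems not yet deployed
    architecture: str              # e.g. model family or foundation model used
    data_sources: list[str] = field(default_factory=list)
    code_origin: str = "in-house"  # in-house, vendor, open source, ...

# A hypothetical inventory with a single example entry.
inventory = [
    AISystemRecord(
        name="resume-screener",
        intended_purpose="Rank job applicants for recruiters",
        risk_tier="high",
        deployed=False,
        architecture="fine-tuned transformer classifier",
        data_sources=["historic hiring data"],
        code_origin="vendor",
    ),
]

# Systems that will need a conformity assessment before entering the market.
needs_assessment = [s.name for s in inventory if s.risk_tier == "high"]
```

Keeping purpose, architecture, data sources, and code origin together in one record makes it straightforward to answer the questions above for any system on demand, deployed or not.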
For assistance in navigating EU AI Act compliance, don’t hesitate to contact BABL AI. One of their audit experts can offer valuable guidance and support before the EU AI Act goes into full effect.