The EU AI Act: Understanding its Implications and Implementation Challenges

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 04/10/2024
In Blog

The recently passed EU AI Act is a significant milestone in the regulation of AI systems across the European Union. This comprehensive new regulation has undergone a rigorous legislative process involving the European Parliament, the European Commission, and the Council of the European Union to reconcile differing versions into a final agreed text.


Following its passage, the Act will soon be officially published in the EU’s official journal, expected around May 2024. Once published, it will become a binding regulation after a 20-day period, although immediate compliance is not required. Instead, there is a phased rollout process with key milestones. Six months after entry into force, prohibitions on certain “unacceptable risk” AI systems, such as remote biometric identification without consent, will take effect. Approximately one year after enactment, detailed guidance will be issued to clarify prohibited and high-risk categories. Two years after enactment, companies must fully comply with requirements for high-risk AI systems, including risk management, testing, documentation, and human oversight mandates.
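The phased milestones above are fixed offsets from the entry-into-force date. As a rough sketch of how a compliance team might map out those deadlines (the entry-into-force date below is a placeholder for illustration, not an official date), the calculation looks like this:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` later, clamping the day to the end of the target month."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

# Placeholder entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Prohibitions on 'unacceptable risk' AI apply": add_months(entry_into_force, 6),
    "Detailed guidance on prohibited/high-risk categories expected": add_months(entry_into_force, 12),
    "Full compliance required for high-risk AI systems": add_months(entry_into_force, 24),
}

for label, deadline in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{deadline}: {label}")
```

The month arithmetic clamps to month ends (e.g. adding one month to January 31 lands on the last day of February), which avoids invalid dates when counting in calendar months.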


High-risk AI encompasses systems used in critical sectors like education, healthcare, and law enforcement, where poor governance poses significant risks. The Act distinguishes several risk categories, and companies are impacted differently depending on where their systems fall. Companies using AI applications likely to fall into the prohibited category, such as remote biometrics, must assess their legality and face penalties for continued use. Companies using high-risk AI for decision-making in sectors like recruitment must implement stringent compliance measures within two years. Companies using low-risk AI systems will face fewer requirements, but the regulation’s principles could still shape global AI governance expectations. While the Act addresses ethical concerns, there is debate over its necessity and impact, particularly on smaller businesses. Even so, AI oversight is important because these systems can carry inherent biases.


Despite challenges, AI governance is an opportunity for companies to build public trust and differentiate themselves responsibly. Defining prohibited versus high-risk AI remains a challenge, underscoring the importance of forthcoming guidance from the European Commission. The EU AI Act marks a significant step in regulating AI, particularly high-risk applications. Companies must prepare comprehensive AI governance strategies to meet compliance deadlines effectively.


As an AI auditing firm, BABL AI helps companies navigate these requirements by identifying gaps and offering tailored advice. Looking ahead, Recker and Brown anticipate covering topics like AI ethics frameworks, emerging regulations, technical auditing approaches, and research implications. Don’t hesitate to reach out to BABL AI; their team of Audit Experts can provide valuable insights on implementing AI while answering your questions and concerns.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and
AI Governance News by subscribing to our newsletter