EU’s Landmark AI Act Enters Into Force, Setting Global Standard for Artificial Intelligence Regulation

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/01/2024
In News

The European Artificial Intelligence Act (EU AI Act), the first comprehensive regulation on AI in the world, officially came into force today. This groundbreaking legislation aims to ensure that AI developed and used within the European Union is trustworthy and respects fundamental rights. The EU AI Act establishes a harmonized internal market for AI technologies in the EU, promoting innovation and investment while safeguarding human rights.

The EU AI Act introduces a forward-looking definition of artificial intelligence, categorizing AI systems based on the level of risk they pose. The regulation outlines four main risk categories: minimal risk, specific transparency risk, high risk, and unacceptable risk.

Minimal risk AI systems, such as recommender systems and spam filters, are considered to pose little threat to citizens’ rights and safety. As a result, these systems face no specific obligations under the EU AI Act, though companies can voluntarily adopt additional codes of conduct to enhance transparency and trust.

For systems classified under specific transparency risk, such as chatbots and AI-generated content, there are clear requirements to inform users they are interacting with a machine. This category mandates labeling AI-generated content, including deep fakes, and notifying users when biometric categorization or emotion recognition systems are being used. Providers must design these systems so that synthetic content is marked in a machine-readable format, ensuring it is detectable as artificially generated or manipulated.
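The Act requires that synthetic content be detectable in a machine-readable format, but it does not prescribe a particular marking scheme; industry approaches range from watermarking to provenance metadata standards such as C2PA. As a minimal illustrative sketch only, assuming a hypothetical JSON provenance envelope (not a format mandated by the Act), the idea might look like this:

```python
import json

def label_synthetic_content(content: str, generator: str) -> str:
    """Attach a machine-readable provenance label to AI-generated text."""
    envelope = {
        "content": content,
        "provenance": {
            "ai_generated": True,   # flags the content as synthetic
            "generator": generator, # identifies the producing system
        },
    }
    return json.dumps(envelope)

def is_ai_generated(labeled: str) -> bool:
    """Check a labeled payload for the synthetic-content flag."""
    return json.loads(labeled).get("provenance", {}).get("ai_generated", False)
```

In practice, compliant providers would rely on robust, tamper-resistant techniques (such as cryptographically signed metadata or watermarks embedded in the media itself) rather than a plain-text wrapper like this.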

High-risk AI systems, which include applications in recruitment, credit scoring, and autonomous robotics, are subject to strict regulatory requirements. These systems must implement risk-mitigation strategies, maintain high-quality data sets, log activities, provide detailed documentation, ensure human oversight, and achieve high levels of robustness, accuracy, and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems within this category.

AI systems deemed to pose an unacceptable risk are banned under the EU AI Act. This category includes AI applications that manipulate human behavior, such as toys using voice assistance to encourage dangerous actions, systems that enable social scoring, and certain biometric systems, including emotion recognition in workplaces and real-time biometric identification for law enforcement in public spaces, with narrow exceptions. These measures aim to protect fundamental rights and prevent the misuse of powerful AI technologies.

The EU AI Act also addresses general-purpose AI models, which are highly capable and versatile AI systems used across various applications. The regulation ensures transparency along the value chain and addresses systemic risks associated with these powerful models.

EU Member States have until August 2, 2025, to designate national competent authorities responsible for overseeing the application of AI rules and conducting market surveillance. Implementation and enforcement will be supported by the European Artificial Intelligence Board, a scientific panel of independent experts, and an advisory forum of diverse stakeholders. Companies failing to comply with the rules could face fines of up to 7% of their global annual turnover for violations involving banned AI applications, and up to 3% for other infractions.

While the majority of the EU AI Act’s provisions will take effect on August 2, 2026, prohibitions on AI systems deemed to present an unacceptable risk will apply after six months, and rules for general-purpose AI models will apply after 12 months. To bridge the transitional period, the European Commission has launched the AI Pact, encouraging AI developers to voluntarily adopt key obligations of the EU AI Act ahead of the legal deadlines.

The Commission is also developing guidelines and co-regulatory instruments, such as standards and codes of practice, to facilitate the implementation of the EU AI Act. A call for expressions of interest has been opened for participation in developing the first general-purpose AI Code of Practice, along with a multi-stakeholder consultation process.

Need Help?

 

If you have questions or concerns about the EU AI Act, or any global guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.