EU’s Landmark AI Act Enters Into Force, Setting Global Standard for Artificial Intelligence Regulation

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/01/2024
In News

UPDATE — AUGUST 2025: The EU AI Act officially took effect on August 1, 2024, with its phased compliance deadlines now underway. As of February 2, 2025, the bans on AI systems presenting “unacceptable risk” — including social scoring, certain biometric categorization, workplace emotion recognition, and manipulative systems targeting vulnerable groups — are fully enforceable.

On August 2, 2025, the general-purpose AI (GPAI) provisions came into force. These rules require transparency, documentation, copyright compliance, risk assessments, and cybersecurity measures for GPAI providers. The European Commission finalized codes of practice for GPAI in July 2025 to help providers meet these obligations.

EU Member States were required to designate national competent authorities by August 2, 2025, to oversee enforcement, with the European AI Office and European Artificial Intelligence Board now operational. High-risk AI requirements will continue to phase in, with embedded high-risk AI systems in regulated products given until August 2, 2027, for full compliance. The Commission is maintaining its push for voluntary early adoption through the AI Pact, while also releasing standards, guidelines, and an AI Act Service Desk to support compliance.

ORIGINAL NEWS STORY:

EU’s Landmark AI Act Enters Into Force, Setting Global Standard for Artificial Intelligence Regulation


The European Artificial Intelligence Act (EU AI Act), the first comprehensive regulation on AI in the world, officially came into force today. This groundbreaking legislation aims to ensure that AI developed and used within the European Union is trustworthy and respects fundamental rights. The EU AI Act establishes a harmonized internal market for AI technologies in the EU, promoting innovation and investment while safeguarding human rights.


The EU AI Act introduces a forward-looking definition of artificial intelligence, categorizing AI systems based on the level of risk they pose. The regulation outlines four main risk categories: minimal risk, specific transparency risk, high risk, and unacceptable risk.
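As a purely illustrative sketch (the four tier names come from the Act, but the example systems and the lookup itself are hypothetical, not a legal classification tool; real classification depends on a system's intended purpose and the Act's annexes), the risk taxonomy can be pictured as a simple mapping:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories named in the EU AI Act."""
    MINIMAL = "minimal risk"
    TRANSPARENCY = "specific transparency risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Hypothetical example systems drawn from this article's examples;
# this is not how legal classification under the Act actually works.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "chatbot": RiskTier.TRANSPARENCY,
    "credit scoring": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}

print(EXAMPLES["credit scoring"].value)  # high risk
```

Obligations scale with the tier: nothing mandatory at the bottom, disclosure duties in the middle, strict requirements at high risk, and an outright ban at the top.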


Managing Minimal and Transparency Risks


Minimal-risk AI systems, such as spam filters or recommender tools, are considered low-impact. These systems face no mandatory obligations, though developers may voluntarily follow best practices to increase transparency and public trust. Systems under specific transparency risk—including chatbots and AI-generated content—must inform users when they interact with a machine. Moreover, providers must clearly label synthetic content, such as deepfakes, in a machine-readable format. This labeling ensures that manipulated or generated content can be easily identified.

Strong Oversight for High-Risk Applications


High-risk AI systems, such as those used in credit scoring, recruitment, or autonomous robotics, face strict obligations. Providers must implement data quality checks, maintain documentation, log system activities, ensure human oversight, and follow cybersecurity standards. In addition, these systems must demonstrate high levels of reliability and accuracy. To support innovation, the EU will use regulatory sandboxes—controlled environments where organizations can test AI under supervision. These sandboxes are designed to help companies meet compliance goals while experimenting with new applications responsibly.


Banning Unacceptable Risk AI


AI systems that manipulate behavior or violate human dignity are banned. Examples include social scoring mechanisms, biometric emotion recognition in workplaces, and real-time facial recognition in public spaces. Some exceptions exist for law enforcement under strict safeguards. These bans aim to prevent technologies that could harm citizens or undermine democratic values. As a result, the EU AI Act sets a clear ethical boundary for AI development worldwide.


Addressing General-Purpose AI Models


The regulation also covers general-purpose AI (GPAI) models—advanced systems used across multiple applications. GPAI providers must ensure transparency across the supply chain and manage systemic risks linked to these powerful models. To support implementation, the European Commission has published guidelines and invited developers to join the AI Pact. This initiative encourages voluntary early compliance with key obligations ahead of legal deadlines.


Implementation and Enforcement


EU Member States have until August 2, 2025, to appoint national authorities responsible for applying the rules and monitoring compliance. Oversight will be coordinated by the European Artificial Intelligence Board and the newly established European AI Office. Non-compliance carries significant penalties. Companies could face fines of up to 7% of global annual turnover for violations involving banned AI and up to 3% for other breaches. Most provisions will take effect on August 2, 2026. However, bans on unacceptable-risk AI and GPAI rules are enforceable earlier, ensuring early protections.
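To make those penalty ceilings concrete, here is a back-of-the-envelope calculation using only the percentages cited above (the Act also sets fixed euro amounts not shown here; the turnover figure is invented for illustration):

```python
def max_fine(turnover_eur: float, banned_ai_violation: bool) -> float:
    """Upper bound on a fine as a share of global annual turnover,
    using the rates cited in this article: 7% for violations involving
    banned AI, 3% for other breaches. Illustrative only."""
    rate = 0.07 if banned_ai_violation else 0.03
    return turnover_eur * rate

# A hypothetical company with EUR 2 billion in global annual turnover:
print(max_fine(2_000_000_000, banned_ai_violation=True))   # 140000000.0
print(max_fine(2_000_000_000, banned_ai_violation=False))  # 60000000.0
```

Even at the lower 3% rate, the exposure for a large firm runs into the tens of millions of euros, which is why early voluntary compliance via the AI Pact is attractive.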


Need Help?


If you have questions or concerns about the EU AI Act, or any other global guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
