European Parliament Study Warns Overlapping AI Laws Could Hinder Innovation, Urges Clearer Rules and Flexibility

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/11/2025
In News

A new European Parliament study examining how the EU AI Act interacts with the broader digital regulatory framework finds that Europe’s fast-expanding AI rulebook may unintentionally create friction, compliance uncertainty, and slower innovation if lawmakers do not streamline the system.


The 108-page study — Interplay between the AI Act and the EU digital legislative framework — was commissioned by the Parliament’s Committee on Industry, Research and Energy (ITRE). It assesses how the EU AI Act overlaps with landmark data and technology laws such as the GDPR, the Data Act, the Cyber Resilience Act, and the Digital Services Act.


Researchers conclude that the Act aims to function simultaneously as a product-safety regime, a fundamental-rights safeguard, and an innovation policy tool — an ambition that introduces contradictory expectations for companies and national regulators. The report warns that applying traditional product-conformity systems to fast-evolving AI could prove difficult, particularly where fundamental rights are affected.


Compliance Load vs. Innovation Goals


The Parliament-commissioned analysis highlights four tensions:


  • Product standards vs. fundamental rights protections — CE-marking and conformity assessment frameworks may not translate well to issues such as discrimination or civil liberties.
  • Horizontal law vs. fragmented enforcement — the AI Act must operate alongside several other digital laws without clear carve-outs, creating overlapping oversight authority across EU institutions.
  • Static rules vs. dynamic AI systems — requirements tied to fixed high-risk categories may fail to keep pace as models update continuously, encouraging risk acceptance instead of risk mitigation.
  • Compliance burden vs. viable innovation — heavy documentation and continuous monitoring could deter European companies. In July 2025, 45 major firms — including Airbus, TotalEnergies, BNP Paribas, Siemens, and AI startups Mistral AI and Pigment — urged the Commission to “stop the clock” and postpone enforcement.


Global Context: U.S. and China Take Contrasting Paths


The study situates the EU effort within intensifying global competition. It contrasts Europe’s unified but heavy regulatory structure with the U.S., where a decentralized, market-driven approach relies heavily on self-regulation and sector-specific rules, and with China’s hybrid system that combines investment, rapid deployment, and selective enforcement to support industrial policy.


The report also notes geopolitical pressure shaping AI regulation: trade tensions have led the U.S. and China to impose export controls and sanctions on strategic technologies such as semiconductors.


Risk-Tier System and Penalties


The report reiterates the AI Act’s tiered risk system and its strong enforcement posture. High-risk systems will require transparency, human oversight, and risk assessments, and must be registered in an EU-wide database. Companies that violate the Act may face fines of up to €35 million or 7% of global revenue — among the highest penalties available under any technology regulation.


Recommendations: More Coordination, Greater Flexibility


To reduce regulatory friction, the authors recommend:


  •  increased coordination across EU agencies,
  •  clarification of overlapping obligations with GDPR and the Data Act, and
  •  potential evolution of the AI Act toward more dynamic or “adaptive” regulation.


The study includes an annex mapping every interaction between the Act and EU digital laws, with suggested updates where conflicts may arise.


Bottom Line


The report underscores a paradox: Europe wants to lead the world in “trustworthy AI,” but may risk slowing down the very innovation it seeks to encourage.


As the global race to define AI rules accelerates, the EU faces a critical test: whether it can maintain strong protections without pushing AI development elsewhere.


Need Help?


If you have questions or concerns about these or any other global guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.


Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.