European Parliament Study Calls for Expanded AI Liability Rules to Ensure Consumer Protection

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/26/2024
In News

A recent study by the European Parliamentary Research Service (EPRS) has called for significant revisions to the European Commission’s proposed “Artificial Intelligence Liability Directive (AILD),” a legal framework designed to address the civil liability challenges posed by artificial intelligence (AI) systems. The study, published in September 2024 by the European Parliament, scrutinizes the Commission’s initial impact assessment and proposes expanding the liability rules to better protect consumers and harmonize AI governance across the European Union.

The introduction of AI systems like ChatGPT, autonomous vehicles, and algorithm-driven insurance platforms has reshaped industries, but it has also exposed gaps in traditional liability rules. The European Commission’s AILD proposal seeks to close these gaps, yet the EPRS study argues that the proposal falls short in several areas, especially in its limited scope and lack of clear, enforceable standards for AI-related harms.

Study Questions Effectiveness of Fault-Based Liability

The AILD seeks to hold AI developers and deployers accountable for damage caused by their systems. However, the EPRS argues that the directive relies too heavily on proving fault. Establishing fault is often difficult when dealing with AI systems that learn and adapt on their own. One of the study’s main critiques is the absence of strict liability—a legal approach that assigns responsibility regardless of fault. The EPRS recommends that the EU apply strict liability to high-impact AI systems, such as autonomous vehicles and AI-powered medical diagnostics, where the potential for harm is substantial.

“The traditional fault-based liability system may not be sufficient for dealing with the unpredictable and opaque nature of AI systems,” the study states. “There is a need for more robust mechanisms that ensure consumers are protected when AI systems fail or cause harm, especially in sectors where human life or financial stability is at risk.”

Expanding the Directive to Cover General-Purpose AI

The EPRS also calls for the AILD to cover general-purpose AI systems with widespread use. At present, the directive applies only to a narrow set of AI products. This excludes many tools that are widely deployed and could still cause harm. General-purpose AI, such as the model behind ChatGPT, is not explicitly covered under the current proposal. The EPRS argues that these systems pose similar risks to specialized AI and should face the same liability standards. Broadening the directive’s scope would ensure that consumers remain protected regardless of how or where AI systems are used.

Clarifying Overlaps with Product Liability Rules

Another concern is how the AILD interacts with the revised Product Liability Directive (PLD). The PLD applies strict liability to defective products, including AI-enabled devices, while the AILD focuses on fault-based liability. The study warns that without clarification, the two frameworks could overlap or conflict. The EPRS recommends that the Commission define how the AILD and PLD will complement each other. Clear boundaries would help businesses understand which framework applies and ensure that consumers can seek fair compensation for AI-related damages.

Avoiding Fragmentation Across the European Union

The study also highlights the risk of regulatory fragmentation. Because the AILD is a directive, member states would have flexibility in how they implement it. That flexibility could lead to a patchwork of national laws that weaken the EU’s single market. To prevent this, the EPRS suggests adopting a regulation instead of a directive. Regulations take effect uniformly across all member states, ensuring consistency. “Harmonizing AI liability rules across the EU is essential to ensure legal certainty for businesses and consumers alike,” the study argues. “A regulation-based approach would create a level playing field for AI developers and protect consumers, regardless of where they are in the EU.”

Comparing Global Approaches to AI Liability

The EPRS also compares the EU’s strategy to frameworks in other regions. California has already adopted strict liability for autonomous vehicles, while Canada is exploring similar options for AI used in healthcare and finance. The study suggests that the EU could learn from these models to design a more comprehensive approach that reflects the realities of AI risk.

A Call for a Proactive Approach

In conclusion, the EPRS urges the European Commission to take a proactive role in reforming AI liability. It recommends expanding the AILD’s scope, adding strict liability for high-impact AI systems, and moving toward a regulation-based model. These steps, the study says, would create stronger consumer protection and a consistent framework for AI governance across the European Union.

Need Help?

If you have questions or concerns about the EU’s AI proposals and global guidelines, reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
