European Parliament Study Calls for Expanded AI Liability Rules to Ensure Consumer Protection

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/26/2024
In News

A recent study by the European Parliamentary Research Service (EPRS) has called for significant revisions to the European Commission’s proposed “Artificial Intelligence Liability Directive (AILD),” a legal framework designed to address the civil liability challenges posed by artificial intelligence (AI) systems. The study, published in September 2024 by the European Parliament, scrutinizes the Commission’s initial impact assessment and proposes expanding the liability rules to better protect consumers and harmonize AI governance across the European Union.

The introduction of AI systems like ChatGPT, autonomous vehicles, and algorithm-driven insurance platforms has reshaped industries, but it has also exposed gaps in traditional liability rules. The European Commission’s AILD proposal seeks to close these gaps, yet the EPRS study argues that the proposal falls short in several areas, especially in its limited scope and lack of clear, enforceable standards for AI-related harms.

The Commission’s AILD aims to hold developers and deployers of AI accountable for the damage their systems may cause. However, the EPRS study contends that the proposal does not go far enough in addressing the distinct risks posed by AI. The study highlights that while the AILD introduces essential principles of liability, it relies heavily on proving fault—an often complex and difficult process when dealing with AI systems that are designed to learn and adapt autonomously.

One of the study’s primary critiques is the lack of a thorough exploration of strict liability—a legal framework where liability is assigned regardless of fault or negligence. The EPRS argues that strict liability should be considered for high-impact AI systems, such as autonomous vehicles and AI-driven medical diagnostics, where the risks of harm to individuals are significant.

“The traditional fault-based liability system may not be sufficient for dealing with the unpredictable and opaque nature of AI systems,” the study states. “There is a need for more robust mechanisms that ensure consumers are protected when AI systems fail or cause harm, especially in sectors where human life or financial stability is at risk.”

The study also calls for an expansion of the AILD to cover general-purpose AI systems and those with widespread, high-impact applications. Currently, the directive applies only to a narrow range of AI products, leaving out potentially dangerous and widely used systems. General-purpose AI, such as the language model behind ChatGPT, is not explicitly covered under the current directive, despite its rapid integration into consumer services.

The EPRS study suggests that these general-purpose AI systems pose as much risk as specialized systems and should therefore be subject to similar liability rules. By broadening the scope of the AILD, the EU could ensure that consumers are protected from unexpected AI-driven harms, regardless of the AI system’s initial purpose.

Another critical issue identified in the study is the relationship between the AILD and the Product Liability Directive (PLD), which has also been revised to address AI. While the AILD focuses on fault-based liability, the PLD applies strict liability rules to defective products, including AI-enabled devices. However, the study points out that the two frameworks may overlap or conflict in their application, leading to legal uncertainty.

The EPRS recommends that the Commission clarify how the AILD and PLD will work together to ensure consistent and fair outcomes for AI-related liability cases. Without such clarification, companies may face confusion over which legal framework applies, and consumers may struggle to seek compensation for AI-related damages.

The study also warns of the risks of fragmentation in the EU’s AI liability framework. By leaving significant discretion to individual member states, the AILD could create a patchwork of differing regulations that undermine the single market. To avoid this, the EPRS advocates for a shift from a directive-based approach to a regulation-based framework. Regulations are directly applicable across all member states, ensuring uniformity and reducing the risk of market fragmentation.

“Harmonizing AI liability rules across the EU is essential to ensure legal certainty for businesses and consumers alike,” the study argues. “A regulation-based approach would create a level playing field for AI developers and protect consumers, regardless of where they are in the EU.”

The EPRS study also compares the EU’s approach to AI liability with frameworks in other regions, including the United States and Canada. The study notes that California has already introduced strict liability rules for autonomous vehicles, while Canada is exploring similar options for AI systems in healthcare and finance. The study suggests that the EU could learn from these international examples to create a more comprehensive liability framework that addresses the unique challenges posed by AI.

In its conclusion, the study emphasizes the need for the EU to take a proactive approach in regulating AI liability, particularly as AI systems become more integrated into daily life. It calls on the European Commission to consider expanding the scope of the AILD, introducing stricter liability rules for high-impact AI systems, and moving toward a regulation-based framework to avoid legal fragmentation.

Need Help?

If you have questions or concerns about the EU's AI proposals and guidelines, or any global guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.
