UPDATE (AUGUST 2025): The AI Liability Directive (AILD) was first proposed by the European Commission on September 28, 2022. Its goal is to harmonize civil liability rules across the European Union (EU), making it easier for victims of AI-related harm to claim compensation. The directive complements the EU AI Act, which regulates AI risks but does not address liability or damages.
As of August 2025, the EU AI Act has been adopted (June 2024) and entered into force on August 1, 2024, with compliance deadlines phasing in between 2025 and 2027. The AI Liability Directive has had a rockier path: in its 2025 work programme, the European Commission signaled its intention to withdraw the proposal, citing no foreseeable agreement between the European Parliament and the Council of the EU. If the directive, or a revised successor, is ultimately adopted, it will still need to be transposed into national laws, meaning its real-world effects will take time to materialize.
ORIGINAL NEWS STORY:
EU Advances AI Liability Directive to Strengthen Accountability and Harmonize Civil Liability Rules
The European Union is proposing an AI Liability Directive to bridge critical gaps in the regulation of artificial intelligence (AI) across its member states. The directive, officially titled the Proposal for a Directive of the European Parliament and of the Council on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence, introduces specific rules to address the unique challenges AI poses for civil liability.
Bridging Gaps in AI Accountability
Even strong safeguards cannot prevent every incident. The EU AI Act focuses on preventing AI risks, but it does not provide for compensating victims when harm occurs. The AI Liability Directive is designed to fill that gap, ensuring that people affected by AI-related harm can pursue fair compensation while fostering consistent legal expectations across the EU.
Reducing the Burden of Proof for Victims
The directive introduces rebuttable presumptions, a key innovation designed to ease the burden of proof for victims. Traditionally, plaintiffs must prove that the damage they suffered was directly caused by another party's fault. This is often difficult with AI systems because of their complexity and opacity. Under the proposal, courts would be allowed, in specific circumstances, to presume a causal link between a defendant's fault and the AI system's output, shifting part of the evidentiary burden to those responsible for the system. This change makes it easier for individuals to bring claims and strengthens overall accountability.
Creating Consistency Across Member States
Another major goal of the directive is to harmonize national legal approaches to AI liability. Currently, each EU member state interprets liability rules differently, creating uncertainty for businesses and consumers alike—especially in cross-border cases. By establishing uniform standards, the AILD promotes predictability and trust in AI technologies. It also helps reduce the legal friction that has slowed AI adoption in sectors like healthcare, manufacturing, and transportation.
Improving Access to Evidence
The proposal includes important provisions for evidence disclosure in AI-related cases. Courts will have the power to compel parties holding relevant data—such as AI developers or operators—to share it with claimants. This rule is especially crucial for high-risk AI systems, where transparency is essential to proving fault. It ensures that those harmed by AI are not disadvantaged simply because they lack access to technical documentation or datasets.
Supporting Innovation Through Legal Certainty
By clarifying liability rules, the directive also supports innovation. Small and medium-sized enterprises (SMEs), in particular, stand to benefit. They often struggle to navigate fragmented liability regimes and bear disproportionate compliance costs. A consistent legal framework will help SMEs innovate with confidence while upholding public trust in AI. In addition, the directive's emphasis on legal certainty should encourage broader AI investment and responsible deployment across the EU's AI sector.
Need Help?
If you have questions about the EU’s AI Liability Directive, the EU AI Act, or any other global AI regulation, contact BABL AI. Their Audit Experts can help you understand new compliance requirements, assess risks, and align your organization’s AI governance strategy with international standards.

