European Commission Withdraws AI Liability Directive: Implications for AI Liability in the EU

On February 11, 2025, the European Commission announced in its 2025 work programme the withdrawal of the AI Liability Directive due to lack of consensus among stakeholders. This decision has far-reaching implications for how AI-related damage and liability are addressed within the EU.

The withdrawal of the AI Liability Directive leaves a significant gap in the European AI legal framework: AI liability now falls largely under national legislation, potentially leading to inconsistent outcomes between member states.

Background: The Original Purpose of the AI Liability Directive

The European Commission presented the AI Liability Directive (AILD) on September 28, 2022, as part of an ambitious dual proposal alongside the revision of the Product Liability Directive. This directive formed an essential component of Europe's strategy to create a comprehensive legal framework for AI-related liability within the European Union.

The original purpose of the AILD was to address fundamental shortcomings in existing liability legislation when dealing with damage caused by AI systems. The Commission recognized that current national liability rules, primarily based on traditional fault concepts, were inadequate for the complex reality of modern AI technology.

The complexity of AI liability

Traditional liability rules require victims to demonstrate wrongful action by an identifiable person who caused the damage. This principle, which has functioned for centuries for traditional products and services, fundamentally conflicts with the nature of AI systems. Artificial intelligence operates autonomously, learns from data, and makes decisions based on complex algorithms that are difficult for even their developers to comprehend.

The so-called "black box" problem forms a central challenge here. Many AI systems, particularly complex machine learning models, function in ways that are opaque to outsiders and sometimes even to their creators. This opacity makes it extremely difficult for victims of AI-related damage to demonstrate how and why an AI system made a particular decision that led to harm. Meeting the burden of proof thereby becomes prohibitively difficult and costly, rendering many legitimate claims practically unenforceable.

Furthermore, a lack of harmonization between EU member states threatened to lead to a fragmented legal landscape. Different national approaches would not only lead to increased costs for companies active in multiple EU markets, but also to legal uncertainty for consumers about their rights in case of AI-related damage. The AILD was intended to prevent this fragmentation by introducing uniform rules.

Reasons for Withdrawal

The Commission cited "no foreseeable agreement" as the main reason for withdrawing the directive in February 2025.

The decision to withdraw the AI Liability Directive was not unexpected, but was the result of a lengthy political and legislative process in which various stakeholders fundamentally disagreed about the necessity and design of the directive.

Industry coalition against the directive

On January 29, 2025, just weeks before the Commission announced its decision, an influential coalition of 12 industry organizations, including MedTech Europe, published a joint statement calling for the withdrawal of the AILD. This coalition, representing a broad spectrum of sectors, expressed fundamental objections to the proposed directive.

The industry organizations warned that the AILD would lead to legal complexity that could harm the competitiveness of the European Union. They argued that the proposed rules would mean increased administrative and legal burden for companies developing or deploying AI technology, which could stifle innovation at a time when Europe is trying to strengthen its position as an AI leader.

Furthermore, the industry feared that uncertainty around potential liability could discourage investment in AI technology. In a sector where capital-intensive research investments are essential for breakthroughs, any factor that increases investment risks could lead to a relocation of AI innovation to jurisdictions with less stringent liability regimes.

Parliamentary division

A divided situation also emerged within the European Parliament. The Internal Market and Consumer Protection Committee, which played a central role in evaluating the directive, considered the adoption of the AILD premature and unnecessary. This committee argued that existing legal instruments, combined with the recently adopted AI Act and the revised Product Liability Directive, might be sufficient to adequately address AI-related liability.

However, not all MEPs shared this view. CDU MEP Axel Voss, a prominent voice in AI legislation within Parliament, called the withdrawal "a disaster for European companies and citizens". Voss and other supporters of the directive argued that the withdrawal represented a missed opportunity for Europe to lead in creating a fair and transparent liability structure for AI systems.

Supporters of the directive in Parliament referred to increasing pressure from the technology industry for regulatory simplification as an important factor in the Commission's decision. This dynamic illustrates the tension between the desire to create a robust legal framework for consumer protection on one hand, and the need to keep Europe attractive for AI investments on the other.

Implications for AI Liability Within the EU

The emergence of a fragmented legal landscape

The withdrawal of the AI Liability Directive has far-reaching implications for how AI liability is addressed within the European Union. Instead of a harmonized EU-wide system, AI liability will now primarily fall under the national laws of the 27 member states, each with its own legal traditions, procedures, and interpretations of liability law.

This fragmentation creates a complex legal landscape in which organizations developing or implementing AI systems must now navigate through a maze of different national regulations. Where the AILD would have introduced uniform EU rules, we now see a situation where 27 different national systems may apply, depending on the jurisdiction in which damage occurs or where a lawsuit is filed.

Aspect            | With AILD              | After Withdrawal
Harmonization     | Uniform EU rules       | 27 different national systems
Burden of Proof   | Simplified procedures  | National variations
Legal Uncertainty | Reduced                | Increased

The burden of proof, one of the most complex aspects of AI liability, will now vary by country. Some member states may have more progressive approaches that alleviate the burden of proof for victims, while other countries may adhere to traditional liability principles that make it more difficult for victims to successfully file a claim. This variation in procedures and standards significantly increases legal uncertainty for both businesses and consumers.

Sector-specific implications

The implications of the withdrawal manifest differently depending on the sector and type of AI application. Professional AI applications, which typically fall outside the scope of the Product Liability Directive, now remain fully subject to national legislation. This means that Business-to-Business AI applications, such as AI systems used in the financial sector, healthcare, or industrial automation, face different liability regimes depending on the country in which they are used.

For consumer AI products, the situation is somewhat different. These products remain partially covered under the revised Product Liability Directive, which was adopted in October 2024. This directive provides some protection for consumers who suffer damage from defective AI-enabled products. However, significant gaps remain, particularly for non-pecuniary damage and pure economic losses, which often fall outside the scope of product liability.

These sectoral differences create an uneven playing field where consumers are better protected than professional users in some cases, while the opposite may be true in other situations, depending on the specific national legislation that applies.

Context: AI Act and Product Liability Directive Revision

The complex relationship with the AI Act

The AI Liability Directive was originally designed as an essential complement to the European AI Act, which entered into force in August 2024. These two legislative instruments had different but closely related objectives that together would have formed a comprehensive legal framework for AI.

The AI Act focuses primarily on regulating AI development and implementation, with strict requirements for high-risk AI systems, prohibitions on certain AI practices, and governance structures for general-purpose AI systems. The law establishes technical standards, conformity assessments, and oversight mechanisms to ensure that AI systems are safe and reliable before they are placed on the market.

The AILD would have focused on the individual rights of persons who suffer damage after AI systems are already in use. Where the AI Act works preventively by setting rules for AI development, the AILD would have worked reactively by providing clear procedures for compensation when AI systems cause damage despite all preventive measures.

This complementarity means that the withdrawal of the AILD leaves an important gap in the European AI legal ecosystem. The AI Act contains no specific provisions on individual compensation, while the AILD would not have contained substantive rules on AI development. Together they would have formed a complete system; separately, important gaps remain.

Interestingly, national courts, despite the withdrawal of the AILD, will still refer to the AI Act when assessing AI liability cases. The classifications, definitions, and risk assessments from the AI Act will likely play an important role in national lawsuits, governed by EU legal principles such as the principle of effectiveness that requires national procedures to provide effective legal protection.

The success of the Product Liability Directive

While the AI Liability Directive was withdrawn, the revised Product Liability Directive has had a completely different fate. This directive was successfully approved and adopted into EU law in October 2024, representing an interesting contrasting development.

The revised Product Liability Directive focuses primarily on consumer AI products and introduces important adaptations to traditional product liability law to better deal with the specificities of AI-enabled products. The directive recognizes that traditional product liability, based on defects in products, needs adaptation for software-intensive products that can change after sale through software updates and machine learning.

However, the scope of the Product Liability Directive is deliberately limited and excludes important categories. Professional AI applications, Business-to-Business transactions, and certain forms of damage such as pure economic losses largely fall outside the scope of this directive. This means that the Product Liability Directive, although successfully adopted, offers only a partial solution to AI liability issues.

Future Perspectives and Recommendations

The search for alternative approaches

After withdrawing the AILD, the European Commission reserved the right to assess whether and how AI liability should be addressed at EU level in the future. This open stance suggests that the Commission recognizes that the problems the AILD tried to solve have not disappeared with the withdrawal of the proposal.

Various alternative approaches are conceivable for the future. The Commission could choose a more phased approach, first looking at specific sectors or uses of AI where liability problems are most urgent. Alternatively, the focus could be shifted to strengthening existing instruments, such as further adaptations to the Product Liability Directive or facilitating voluntary industry standards for AI liability.

Another possibility is that the Commission chooses a more technical approach, focusing on improving AI transparency and explainability to reduce burden of proof problems, rather than directly adapting liability rules. This could be achieved through additional technical standards under the AI Act or by stimulating research into "explainable AI" technologies.

Strategic recommendations for different stakeholders

For companies developing or implementing AI technology, the situation after the withdrawal of the AILD becomes significantly more complex. These organizations must now navigate through a fragmented legal landscape that brings different risks and compliance requirements in different EU member states.

Companies should invest in robust internal compliance programs that account for legal variations between different EU markets. This requires not only legal expertise in multiple jurisdictions, but also a dynamic approach that can anticipate changing national legislation. Additionally, investing in transparent and explainable AI systems is not only an ethical choice, but also a practical strategy to reduce liability risks by making it easier to demonstrate that AI systems function properly.

For lawyers and compliance specialists, a new specialization emerges in navigating cross-border AI liability. These professionals must closely monitor national developments and develop legal strategies that account for potential jurisdiction shopping by claimants. Focusing on AI Act compliance can serve as a fundamental basis for liability reduction, as compliance with AI Act requirements will likely be a positive factor in national lawsuits.

The withdrawal of the AILD means that organizations deploying AI now face a patchwork of national regulations, which could significantly increase compliance costs and legal risks.

Final Thoughts

The withdrawal of the AI Liability Directive marks a significant setback in the European ambition to create a comprehensive legal framework for AI. While the AI Act establishes technical standards and implementation requirements, the crucial question of civil liability remains largely unanswered at EU level.

This creates a paradoxical situation: Europe has strict rules for AI development and use, but no harmonized rules for when these systems cause damage. For lawyers, policymakers, and compliance specialists, this means a period of increased uncertainty and the need to develop expertise in multiple national legal systems.

The coming months will be crucial to see whether the Commission comes up with an alternative, or whether member states individually develop their own AI liability regimes - with all the fragmentation that entails.

Date of withdrawal: February 11, 2025. Impact: Increased legal fragmentation and uncertainty for AI liability within the EU.