The European Commission is proposing an AI Liability Directive to make it easier for victims of harm caused by AI systems to obtain compensation.

Ida Varošanec (PhD student, University of Groningen) and Nynke Vellinga (post-doc researcher, University of Groningen)

Photo credit: Cryteria, via Wikimedia Commons

1. Proposal Goals

The European Commission’s proposal for an AI Liability Directive, published on September 28, 2022, alongside an updated Product Liability Directive, aims to address the risks posed by artificial intelligence (AI) while acknowledging its potential benefits. The proposal stems from a report recognizing the potential harm AI systems can cause due to their connectivity, unpredictable nature, and opacity, characteristics that make it difficult to assign responsibility for AI-related incidents.

To tackle these challenges, the AI Liability Directive seeks to give victims of AI-related harm legal protection comparable to that enjoyed by victims of harm caused by other products. It aims to build trust in AI technologies, promote their adoption, and harmonize the EU’s legal landscape for AI. This legislation will work in conjunction with other AI regulations, such as the proposed AI Act, to create a comprehensive legal framework for AI liability within the EU.

2. Scope of the AI Liability Directive

Contrary to what its name suggests, the proposed AI Liability Directive doesn’t introduce new grounds for civil liability: claims concerning defective products remain governed by the Product Liability Directive, while other grounds for civil liability remain a matter of national law. Instead, the directive establishes rules on the disclosure of evidence and on presumptions of causality in cases where fault-based liability arises from the use of an AI system.

Due to the complexity of AI systems, proving fault in such cases can be difficult and costly, potentially disadvantaging individuals harmed by AI systems. To address this, the directive focuses on:

(a) Mandating the disclosure of evidence related to high-risk AI systems to help claimants establish fault-based civil liability claims for damages.

(b) Easing, by means of rebuttable presumptions, the burden of proof in fault-based civil liability claims brought before national courts for damages caused by an AI system.

While the directive doesn’t directly apply to risk-based liability claims, the proposed Product Liability Directive includes similar rules for evidence disclosure and burden of proof.

The scope of the AI Liability Directive is partially limited to “high-risk AI systems,” as defined by the proposed AI Act. The AI Act categorizes AI systems based on their risk level, with high-risk systems either being subject to third-party assessment under existing sectoral legislation or identified as high-risk due to their application in specific areas like transportation or education. The directive’s rules on evidence disclosure solely apply to these high-risk AI systems, while its rules on the burden of proof extend to claims concerning all types of AI systems.

3. Disclosures and Presumptions Under the AI Liability Directive

3.1 Rebuttable Presumption of a Causal Link

Article 4 of the directive introduces a “rebuttable presumption of a causal link” in fault-based liability cases. This means courts can presume a causal relationship between the defendant’s fault and the output produced by an AI system (or its failure to produce an output) if three conditions are met: the fault (a breach of a duty of care under EU or national law) has been established; it can be considered reasonably likely that this fault influenced the AI system’s output or its failure to produce one; and the claimant has demonstrated that this output, or the failure to produce it, gave rise to the damage. The directive further distinguishes between providers and users of AI systems in subsequent paragraphs.

This presumption of a causal link does not apply to high-risk AI systems if the defendant demonstrates that sufficient evidence and expertise are reasonably accessible for the claimant to prove the causal link themselves. For non-high-risk AI systems, the presumption only applies where the national court considers it excessively difficult for the claimant to prove the causal link. Regardless of the type of AI system, the defendant retains the right to rebut any presumption of a causal link.

3.2 Disclosure of Evidence

Article 3 of the directive outlines the conditions for disclosing evidence related to high-risk AI systems suspected of causing harm, establishing a rebuttable presumption of non-compliance if such evidence is withheld.

This provision empowers courts to order the disclosure of relevant evidence about the specific high-risk AI system in question. However, this disclosure isn’t absolute and is subject to a proportionality assessment, considering the legitimate interests of all parties involved, including the protection of trade secrets and confidential information.

The directive emphasizes striking a balance between the claimant’s right to access information and the need to safeguard sensitive information. National courts are granted the authority to implement measures protecting trade secrets during and after proceedings, ensuring a fair trial for all while preserving the confidentiality of sensitive information.

4. Commentary

The EU’s effort to level the playing field between individuals and AI system developers by addressing the information asymmetry is commendable. Holding developers liable and ensuring compensation can encourage them to prioritize the safe and responsible development of AI systems. This directive, along with the Product Liability Directive, the proposed AI Act, and other product safety regulations, represents a comprehensive approach to regulating AI systems and their potential risks.

However, the AI Liability Directive has a critical flaw: it offers defendants a way to sidestep evidence disclosure. If they refuse to disclose trade secret information about their AI system, a presumption of non-compliance with their duty of care arises. This creates a loophole where defendants might choose to pay compensation to avoid disclosing sensitive information, potentially prioritizing financial considerations over transparency and accountability.

This contradicts the AI Act’s push for transparency in high-risk AI systems. By offering a way to circumvent transparency, the AI Liability Directive undermines the AI Act’s objectives, creating tension between the two instruments. By not extending the AI Act’s transparency requirements to the AI Liability Directive, the European Commission missed an opportunity to establish a clear stance on transparency.

This lack of mandatory transparency carries another disadvantage. When developers can avoid disclosing crucial information by paying compensation, the incentive to improve AI systems diminishes. This lack of transparency could hinder innovation in the field and erode public trust in AI technologies. The current approach might ultimately impede the development and refinement of safer, more reliable AI systems.

Licensed under CC BY-NC-SA 4.0