On September 19, 2024, the European Parliamentary Research Service (EPRS) published its Complementary Impact Assessment of the Proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive, AILD), as requested by the Committee on Legal Affairs (JURI). The AI Liability Directive had been proposed by the European Commission on September 28, 2022, but the legislative process was later suspended pending the adoption of the related Artificial Intelligence Act (AI Act), which was formally signed on June 13, 2024. Alongside the ex-ante obligations imposed by the AI Act on providers and deployers of Artificial Intelligence (AI) systems, ex-post measures are introduced by both the AILD and the revised Product Liability Directive (revised PLD) adopted by the Council on October 10, 2024 – which now encompasses software as a product and recognizes “destruction or corruption of data that are not used for professional purposes” as a compensable damage.
The AILD aims to facilitate access to redress for harm caused by the use of an AI system by alleviating the burden of proof in relation to fault-based liability. To that end, it introduces an evidence disclosure mechanism (AILD, Art. 3) and two rebuttable presumptions – one regarding non-compliance (AILD, Art. 3) and one regarding causality in case of fault (AILD, Art. 4). The AILD is aligned with the AI Act’s risk classification of AI systems, and some of its main principles apply solely to high-risk AI systems (HRAIS) as defined under the AI Act. This is the case for the evidence disclosure mechanism and for part of the burden-of-proof alleviation.
Among other suggestions, the EPRS Complementary Impact Assessment evaluates the adequacy of a potential strict liability regime, a potential software liability regime, the current AI definition, the current risk classification (considering potential extensions of the AILD’s scope), and amendments to the evidence disclosure mechanism. The following sections explore these proposed amendments and discuss their implications in the overarching context of the EU’s framework for AI.
Response to the European Commission’s AILD Impact Assessment: pros and cons of strict liability
The EPRS Complementary Impact Assessment includes an appraisal of the European Commission’s AILD Impact Assessment. The EPRS tackles the different policy options set out in the Commission’s Impact Assessment, including that of a strict liability regime for HRAIS with mandatory insurance for liable actors. Strict liability for HRAIS had already been envisaged in the European Parliament’s 2020 resolution on AI liability, which would have attached liability to any harm caused by such systems’ operation, alongside mandatory insurance requirements (EP resolution, Art. 4(4)) and financial caps on damages (EP resolution, Art. 5).
The EPRS assesses the option of adopting strict liability in the AILD, highlighting the need to balance potential benefits (simplification of compensation processes, cost internalization, effective enforcement) against risks: chilling effects on AI innovation and deployment, especially for SMEs; increased vexatious litigation, especially for immaterial harm; and indirect detriments to fundamental rights, especially through a reduced supply of beneficial AI. The latter could also be argued to carry the risk of the AI market being limited to ‘less safe options’ developed and deployed in jurisdictions that do not offer the same protections as the EU framework. While highlighting the different interests to be balanced, the EPRS fails to propose a solution that would achieve this equilibrium.
Interaction with the AI Act and revised PLD: defining the AILD’s scope in terms of AI and software
The EPRS recalls the importance of aligning core AILD concepts with those adopted in the approved version of the AI Act (such as ‘deployer’ rather than ‘user’). However, it offers a critique of the AI Act’s definition of AI and of its application to the AILD. The EPRS proposes either extending the AILD’s scope to encompass all software, as under the PLD, or accompanying the AI Act’s definition of AI with guidelines delineating which systems qualify as AI under the law, as opposed to traditional software. It could be argued that this objective would be better fulfilled by a checklist of technical characteristics (such as system autonomy and self-learning capabilities) set out in an Annex subject to modifications and updates under a defined process and timeline.
In addition to amending the AILD’s definition of AI, the assessment proposes bringing software within the proposal’s scope. The reasoning stems from gaps in the revised PLD that the EPRS deems could be filled by an amended AILD. For instance, while the revised PLD’s personal scope is limited to professional users of AI, that of the AILD extends to non-professional users. Additionally, the EPRS argues that the AILD would extend the burden-of-proof alleviations to cases of discrimination, violations of personality rights and other fundamental rights, damage to professionally used property, pure economic loss, and sustainability harms.
As the AILD would only facilitate redress in cases involving AI, the EPRS calls for general software liability rules. These would include evidence disclosure mechanisms applying to all types of software, and a rebuttable presumption for non-AI software equivalent to that for non-high-risk AI systems – that is, applicable only where the court considers it “excessively difficult for the claimant to prove the causal link”. Several persisting gaps can be identified.
First, for non-high-risk AI systems, the assessment of “excessive difficulty” is based on the characteristics of AI systems, such as autonomy or opacity. As such an assessment would not apply to non-AI software, it remains unclear how courts would determine excessive difficulty in proving causality. Beyond this legal uncertainty, unintended consequences – including chilling effects on innovation – could arise from the different treatment reserved for non-AI systems (for which developers have no specific obligations to rely on to demonstrate compliance), as compared with HRAIS (for which developers can rely on specific obligations under the AI Act).
Second, in relation to the AILD, the definition of damage eligible for compensation is left to each Member State’s civil law system. This applies to legal interests such as personal dignity, respect for private and family life, and the right to equality and non-discrimination. For the latter in particular, the EPRS argues that the AILD’s applicability should be made explicit.
Third, such damages could still arise from a product defect, yet no alleviations would be available under the revised PLD. This would be the case, for instance, if discrimination were to arise from a defect in an AI product used for automated decision-making.
Interaction with the AI Act: gaps arising from the AILD’s reliance on the risk-classification
The EPRS highlights how the AILD’s focus on HRAIS leads to gaps in terms of ex-post measures for AI systems presenting unacceptable risk, individual risk, or so-called ‘high impact’. First, as the AILD focuses on HRAIS, it lacks ex-post mechanisms for AI systems that are prohibited under the AI Act because they pose “unacceptable risks” (AI Act, Art. 5). In theory, being banned, these systems should present no possibility of damage; in practice, infringements of the ban could lead to harm requiring compensation. So far, “non-compliance with the prohibition” only leads to administrative fines under the AI Act. The EPRS recommends that the AILD include either strict liability for damages caused by prohibited AI systems (where only causality between deployment and damage would need to be established), or automatic non-rebuttable presumptions of fault and causality (between fault and AI output).
The EPRS Complementary Impact Assessment also suggests amendments to the AILD’s scope, criticizing its focus on HRAIS. It claims that the AI Act’s high-risk classification, by accounting only for societal risks and not for individual risks, fails to capture all AI applications presenting significant liability risks to individuals. The EPRS suggests introducing a new category of ‘high-impact AI systems’, including general-purpose AI (GPAI).
Under this proposal, the evidence disclosure mechanism and the causality presumption (between a violation of the safety rules under Art. 55 of the AI Act and the harmful output) would apply to GPAI systems. The presumption would attach to violations of the obligations of model evaluation (AI Act, Art. 55(1)(a)), documentation (AI Act, Art. 55(1)(c)), and assessment and mitigation of possible systemic risks (AI Act, Art. 55(1)(b)). So far, the latter leaves leeway for interpretation, highlighting the crucial role of the General-Purpose AI Code of Practice to be published by the AI Office.
Indeed, the Code of Practice would set out the systemic risk taxonomy as well as assessment and mitigation measures. Without such a code, it would be unclear which measures would be considered sufficient to “mitigate possible systemic risks”, especially given the unpredictability of general-purpose AI models: extending the causality presumption to GPAI would mean applying it to cases where providers cannot foresee all uses, and thus all risks, of their systems. A first draft of the Code of Practice was published on November 14, 2024. The drafting of a clear, easy-to-implement Code of Practice – on which providers could rely to demonstrate compliance with the AI Act – would be crucial to the feasibility of classifying GPAI as ‘high-impact’ AI systems.
Amendments to evidence disclosure: implications for AI explainability and competitive advantage
The impact assessment proposes further amendments to the AILD’s mechanisms for overcoming opacity. Specifically, it proposes aligning the AILD’s evidence disclosure mechanism with that of the PLD, specifying that the disclosed evidence must be “presented in a manner easily understandable to non-experts, such as consumers or their legal counsel”. In support of this amendment, it can be argued that such a specification might imply higher thresholds of AI explainability, a requirement arguably lacking from the EU’s current framework on AI governance. To further strengthen the mechanism, specific ex-ante obligations on explainability could be beneficial, in turn facilitating evidence disclosure in an “easily understandable” format.
Separately, the EPRS impact assessment suggests lowering the barrier for triggering evidence disclosure for claims by non-competitors, who would only have to demonstrate the occurrence of damage and the involvement of an AI system. This lowered barrier would, however, raise questions about how to distinguish non-competitors from potential competitors, as well as about the tension between the need for AI explainability and the protection of intellectual property as an incentive for innovation. If the barrier were lowered, strict definitions of non-competitors would be required, accounting for claimants’ capacity to become competitors in the future, along with strict prohibitions on reusing the disclosed information for further purposes or disclosing it to further parties. Absent such measures, a lowered barrier would threaten the defendant’s competitive advantage in the market.
Conclusion
In its Complementary Impact Assessment, the EPRS raises several concerns regarding the proposed AILD. The proposed extension of the AILD’s scope to prohibited AI systems and to systems posing individual risk seems necessary. Other proposed amendments would require further measures: including GPAI under the so-called high-impact AI category would heighten the need for a clearly defined Code of Practice for GPAI; including software under the AILD’s scope would require clearer ex-ante obligations and rules on the application of the rebuttable presumption to non-AI software. Regarding the proposed amendments to the AI definition, a checklist of technical characteristics could arguably better fulfill this objective. Similarly, the proposed lowering of the threshold for triggering evidence disclosure would need to be implemented very carefully, accounting for risks to information of high business value. As for the proposed ‘understandability’ requirement for evidence disclosure, it is welcome for its potential to partly fill the gap in AI explainability, but it would only reach its full potential if accompanied by relevant ex-ante obligations.
The EPRS also assesses the European Commission’s Impact Assessment, highlighting the tension between the benefits and risks of adopting strict liability. Finding a solution that balances these interests remains crucial. Moreover, the EPRS finds that the Commission’s Impact Assessment fails to consider the environmental costs of AI development and deployment, namely water and energy consumption. Accounting for sustainability under the AILD could be paired with accounting for it under the revised PLD. The author of the EPRS Complementary Impact Assessment had himself previously proposed, in a 2023 paper, an interpretation of the revised PLD under which AI models presenting suboptimal sustainability parameters would qualify as defective by design.
The EPRS is not the only entity to raise concerns regarding the proposed AILD. MLex has reported that, during a meeting of the Council of the EU’s technical body in charge of civil law matters held on November 11, 2024, several Member States – including France, Italy, and Denmark – reiterated their concerns about and opposition to the AILD draft, citing issues such as the text’s complexity for damage victims; some even called for the complete withdrawal of the legislative proposal. On October 25, the Hungarian Presidency had circulated, in reaction to Member States’ criticism, a discussion paper that reportedly included significant amendments (according to MLex), such as lowering the bar for the rebuttable presumption mechanism.
It remains to be seen whether these concerns, raised by the EPRS as well as by Member States, will be reflected in revisions of the AILD legislative proposal. Discussions on the AILD had been suspended pending the adoption of the closely related AI Act; although they have not yet resumed, negotiations are expected to restart now that both the AI Act and the revised PLD have been approved.