1. Introduction[1]
The rapid adoption of Artificial Intelligence (AI) in Europe raises critical concerns about the effectiveness of the EU’s remedial systems. In previous analyses, I examined how technological innovation impacts access to justice in EU composite decision-making (see here), explored limits to EU actors’ accountability in border surveillance (forthcoming soon here), and assessed the remedial frameworks in EU Digital Regulations like the GDPR, AI Act, and DSA (see here). This blog post builds on that discussion by analysing the potential remedial value of the much-debated right to explanation in the EU’s AI Act. I offer two key observations: (a) the current formulation of this right fails to adequately address concerns about AI’s role in individual decision-making; (b) the limits of its formulation could be mitigated through a purposive, integrated interpretation that aligns with the AI lifecycle and existing constitutional requirements under Articles 41 and 47 of the Charter of Fundamental Rights, particularly when public authorities are involved in AI-driven decision-making.
2. “A chain is no stronger than its weakest link”
European administrative law scholarship (see here, here, or here) has long recognised the integrated nature of European administrative conduct, referring to the close interlinking of actions by Member States’ and EU authorities when implementing EU policies. While this scholarship has focused on gaps in effective judicial protection due to jurisdictional limits (and I am no exception to this argument, see here), the AI Act holds the potential to shake up this impasse through the inherently integrated obligations it sets up between AI developers and AI deployers. And, sure, the GDPR also regulates the interactions between data controllers and data processors. Yet, under the GDPR the primary responsibility for the processing rests with one or more controllers, so even where processors process personal data on the controllers’ behalf, the GDPR remedies in principle allow data subjects to enforce their rights against the controller. The same does not seem to hold under the formulation of obligations in the EU’s AI Act. This dislocation of responsibility between the developer and the deployer is highly problematic, given the well-documented negative effects of AI-driven processing of personal data on decision-making capacity (see here what I said on this before). Yet, the EU legislators’ formulation of the AI Act’s right to explanation in Article 86 clearly neglects this fact. Just so we are on the same page, the right reads as follows (with my emphases added):
Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.
The relevant recitals shed some light on the legislators’ intent, which in my view falls rather short of giving the right the teeth it needs to bite as an effective remedy. Recital 93 assumes that ‘[w]hilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from how such AI systems are used,’ justifying that deployers of high-risk AI systems therefore ‘play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system.’ Deployers are, according to the EU legislators, ‘best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use, the persons or groups of persons likely to be affected, including vulnerable groups.’
This justification is a slap in the face to the whole concept of the AI lifecycle and, worse, to decades of research into the black-box nature of algorithmic decision-making. By omitting any mention of the AI provider in the right to explanation, the above formulation ignores the reality that AI deployers—such as immigration officers using EU databases—cannot provide a meaningful explanation of the AI’s influence on their decisions. Even if these officers are well trained and aware of the risks, they are unlikely to fully grasp how the AI impacted their choices.
The requirement to explain only the role of the AI system in the decision-making process and the main elements of that decision renders this right nearly ineffective. For example, would it suffice for an officer to state that they performed a search in the ETIAS system, which led to a refusal of travel authorisation, and that the AI output was manually verified? Most likely, yes. But this shallow explanation would end the inquiry, leaving the affected individual none the wiser about a potential defence strategy. Can the officer alone ever determine whether the AI output resulted from discriminatory profiling? Absolutely not—and research confirms this.
Expecting deployers to provide a meaningful explanation without involving the AI provider is both unrealistic and flawed. While deployers may understand the context of their decision-making better than the provider, the provider holds key information about the system’s inner workings, which is crucial to understanding the final decision. This is especially true in cases like ETIAS, where the provider’s data and setup play a significant role in outcomes. Without recognising the integrated nature of obligations between the deployer and the provider, the right to explanation becomes the weakest link of the AI responsibility chain.
3. A plea for an integrated right to explanation
Scholars have long debated how algorithmic decisions can be meaningfully explained (see notably Wachter and others’ famous proposal for counterfactual explanations). Others have warned of the limited usefulness of a right to explanation and even of its counterproductive consequences. Edwards and Veale cautioned that the existence of this right places an unreasonable burden on the users of AI systems, such as the immigration officer in our case, to explain something they themselves do not fully understand. Furthermore, the insertion of this right into the AI Act without another type of legal redress for affected individuals confirms another of Edwards and Veale’s warnings – that the “hype around the right to an explanation may create the belief that there is no need for further legal solutions”.
Such a legal vacuum is effectively the new reality under the adopted AI Act, with the right to lodge a complaint in Article 85 serving as an insignificant consolation (see my views on this here). While remedies under the GDPR remain available in many cases involving AI-driven individual decision-making—since such cases often involve personal data processing—this shouldn’t justify accepting a poorly formulated new right where it’s truly needed. Instead, the EU’s human-centred approach to AI regulation calls for maximising this right’s potential in line with the AI Act’s goal of ensuring a high level of protection of fundamental rights (Article 1). An isolated reading of the right to explanation and its recitals, in my view, leads to a misguided or counterproductive interpretation of how the obligations stemming from this right should be applied. To address this, I propose two suggestions for an integrated application of the right to explanation in practice.
4. An integrated right to explanation in light of the AI lifecycle
An integrated right to explanation requires a realistic understanding of the chain of responsibility in the AI lifecycle. The AI Act emphasises (recital 73) that “[h]igh-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning, ensure that they are used as intended and that their impacts are addressed over the system’s lifecycle.” The reference to the AI lifecycle is a common expression of the need for a circular chain of responsibility over AI design and use in a democratic society. In other words, effective human oversight of AI-driven individual decision-making, as outlined in the AI Act, will be impossible without allowing deployers to consult AI developers. Integration of responsibility should take various forms: from requiring deployers to obtain sufficient information about the system’s operations to giving them the ability to ask developers whether their AI system’s use may have resulted in discriminatory or otherwise unlawful outcomes. This is essential to meet the demand for “clear and meaningful explanations” of the AI system’s role in decision-making, ensuring accountability and transparency.
This interpretation would also be in line with the approach adopted by Advocate General Richard de la Tour in the pending case C‑203/22 Dun & Bradstreet Austria. In another credit-scoring scenario, the Advocate General concluded that the concept of meaningful information under Article 15(1)(h) of the GDPR also encompasses the protections and purposes outlined in Article 22 GDPR. This means that individuals subjected to algorithmic decisions must be provided with sufficient information about the context and logic of the automated decision-making (ADM) process to enable them to understand how the decision was reached.
Additionally, this could require the decision-maker to seek further explanations from the AI tool’s developer, even if the information is protected by trade secrets. At the very least, the competent data protection authority should have access to this information to perform the necessary oversight, ensuring transparency and accountability in AI-driven decision-making processes. This reinforces the idea that a collaborative responsibility between AI deployers and providers is essential for effective explanation and oversight. We shall see whether the Court follows this approach as well.
5. An integrated right to explanation in light of the reasoning obligations
An integrated right to explanation in public decision-making also requires applying this right in conjunction with the duty to give reasons under Article 41(2)(c) of the Charter (see Fink and Finck, here). Paragraph 3 of Article 86 AI Act clarifies that the right to explanation applies only where such a right is not already provided for under Union law. Indeed, Union law mandates that all decisions of public authorities affecting legal interests and rights must be sufficiently reasoned to allow individuals to defend themselves and courts to review the decisions effectively. This dual function of reasoning—ensuring transparency for individuals and effective oversight by courts—also underpins the purpose of the right to explanation within the AI Act’s Remedies section. In public decision-making, particularly where high-risk AI systems listed in Annex III are involved, the right to explanation should be interpreted through this lens. It must not only clarify the AI system’s role in decisions but also fulfil the broader legal obligations to give reasons, enabling an effective defence and judicial review.
Public authorities are primarily bound by a constitutional duty of care, requiring them to provide reasons for their decisions that enable individuals to assess their chances of appealing and allow courts to meaningfully address such appeals. When high-risk AI systems are used to support decisions, authorities—often also the deployers of these systems—now have an additional obligation. They must offer a meaningful explanation about the AI system’s role in decision-making and the key elements of the decision itself.
Given their duty of care, which, as Hofmann emphasises, demands decisions to be based on full, complete, and adequate information, authorities should be required to consult the AI provider. For instance, an immigration officer who identifies a hit in the ETIAS system may need to seek further information from Frontex, the provider of the ETIAS algorithm. This consultation would help clarify the algorithm’s role in indicating a travel refusal, enabling the officer to provide a complete explanation of their decision. Such transparency is crucial for allowing individuals to defend themselves effectively in court and ensuring that courts can conduct a thorough review of the decision.
6. Integrated interpretation for all …
Integrated interpretation is crucial across the AI Act, particularly in aligning the obligations of developers and deployers of AI systems. Throughout the Act’s 144-page text, similar inconsistencies appear, complicating efforts to ensure strong protection of fundamental rights. To unlock the Act’s full potential, a realistic and integrated interpretation of these obligations is essential—especially in public decision-making contexts, where authorities must respect the rule of law and where the EU and the Member States are required under Article 19(1) TEU to provide effective remedies in law and in practice. Due to limited space, I will develop this holistic suggestion on another occasion. For now, I conclude with a plea: without an integrated interpretation of the much-needed right to explanation briefly proposed above, individuals subject to AI-driven decisions will not be able to enjoy a meaningful remedy.
[1] Europa Institute, Leiden Law School, Leiden University, The Netherlands, s.demkova@law.leidenuniv.nl.