The Rise of Automated Decision-Making and Its Legal Framework

In recent years, automated decision-making (ADM) has become increasingly common,[1] impacting various sectors such as finance, justice, and social services. From the credit industry, where ADM is used to determine loan eligibility, to public administration, which uses it to assess asylum requests,[2] ADM systems offer undeniable benefits in efficiency, speed, and scalability. However, the growing dependence on these systems has raised concerns about transparency, fairness, and the protection of fundamental rights.[3]

At the centre of these concerns is Article 22 of the GDPR, which seeks to protect individuals from the risks associated with fully automated decision-making processes. It establishes that individuals should not be subject to a decision based solely on automated processing, including profiling, unless certain conditions are met. These conditions include situations where the decision is necessary for a contract, is authorized by law, or is based on the individual’s explicit consent.[4]

The GDPR mandates transparency in ADM processes, requiring data controllers to disclose the logic involved in the processing, as well as its significance and the envisaged consequences for the data subject. Articles 13, 14, and 15 reinforce this transparency by obligating companies to provide meaningful information to individuals about the data being processed and the underlying decision-making criteria. This regulatory framework is designed to mitigate the potential risks posed by ADM and to ensure that automated processes do not infringe upon individuals’ rights, particularly when decisions can significantly impact their lives.

  1. The Legal Implications of Automated Decision-Making in the SCHUFA Case

Automated decision-making (ADM) systems, as addressed by the GDPR, are processes in which decisions are made solely through automated means, without significant human intervention. These decisions often have critical implications for individuals, especially when they involve personal data processing. In Case C-634/21, known as the SCHUFA case, this concept of ADM was at the heart of the legal dispute: the question was whether the credit scoring process employed by SCHUFA constituted a fully automated decision under Article 22 of the GDPR.

  • Credit Scoring as an ADM

The SCHUFA case arose when a German citizen applied for a loan, only to have their request denied based on an automated credit scoring system managed by SCHUFA. The credit score assigned to the applicant was calculated using personal data, resulting in a negative rating that influenced the bank’s decision to reject the loan. This denial prompted the applicant to seek access to the data and logic used in the decision-making process, raising critical questions about the transparency of ADM systems.

In credit scoring, algorithms process large volumes of data related to an individual’s financial history, behaviour, and other factors to produce a numerical rating that reflects their creditworthiness. The SCHUFA system, which used such algorithms, was central to the decision of whether or not to grant a loan. The central issue in this case was whether SCHUFA’s involvement constituted an automated decision as described by Article 22 of the GDPR.
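To make the mechanics concrete, the paragraph above can be illustrated with a toy scoring model. This is a purely hypothetical sketch: the feature names, weights, and scale are invented for illustration and bear no relation to SCHUFA’s actual, proprietary method.

```python
# Illustrative only: a toy linear credit-scoring model. All feature names,
# weights, and the 0-1000 scale are hypothetical, not SCHUFA's real method.

def credit_score(applicant: dict) -> int:
    """Combine weighted, normalized financial features into a 0-1000 score."""
    weights = {
        "payment_history": 0.45,    # share of on-time payments (0.0-1.0)
        "debt_ratio": -0.30,        # outstanding debt / income, capped at 1.0
        "account_age": 0.15,        # years of credit history, capped at 20
        "recent_inquiries": -0.10,  # recent credit checks, capped at 10
    }
    features = {
        "payment_history": applicant["on_time_payments"] / max(applicant["total_payments"], 1),
        "debt_ratio": min(applicant["debt"] / max(applicant["income"], 1), 1.0),
        "account_age": min(applicant["account_age_years"] / 20, 1.0),
        "recent_inquiries": min(applicant["recent_inquiries"] / 10, 1.0),
    }
    raw = sum(weights[k] * features[k] for k in weights)
    # Shift the raw value (roughly -0.40 to +0.60) onto a 0-1000 scale.
    return round((raw + 0.40) * 1000)

applicant = {
    "on_time_payments": 58, "total_payments": 60,
    "debt": 12_000, "income": 40_000,
    "account_age_years": 8, "recent_inquiries": 2,
}
score = credit_score(applicant)
```

The sketch shows why such a score is “solely automated” in the relevant sense: once the inputs are collected, the rating is produced without any human judgment, and a bank relying on it adopts that automated output.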

  • Article 22 GDPR and its Broader Scope

Article 22 establishes that individuals have the right not to be subjected to decisions based solely on automated processing, including profiling, if these decisions produce legal effects or significantly impact the person. The goal is to protect individuals from fully automated processes that may lack transparency, fairness, and accountability, especially when the outcome of such decisions can have far-reaching effects on their lives, such as financial credit scores affecting loan approvals.

The Court of Justice of the European Union (CJEU), when reviewing the SCHUFA case, affirmed that credit scoring, particularly in this context, could indeed qualify as an automated decision under Article 22. However, the complexity arose from SCHUFA’s claim that their system merely calculated a score, while the final decision was made by the bank. The question was whether this type of indirect automation, especially where human operators are involved in part of the decision-making, could still fall under the scope of Article 22’s provisions.

The CJEU made it clear that a decision qualifies as automated if it relies significantly on automated data processing, even if a human operator performs the final step. For the purposes of GDPR compliance, the human intervention must be “meaningful.” In this case, the court found that SCHUFA’s credit scoring had a decisive impact on the bank’s decision, making it part of the broader automated decision-making process. Thus, this ruling expanded the interpretation of Article 22 to cover cases where the automated decision forms the basis of a larger, human-influenced decision-making process.[5]

The SCHUFA ruling thus set an important precedent in defining the boundaries of ADM under the GDPR. First, it reinforced the idea that ADM systems, especially when used for credit scoring, cannot operate without meaningful transparency and human oversight. In cases where automated systems play a crucial role in decisions that affect individuals, those individuals have the right to know how those decisions are made and to challenge them if necessary. Furthermore, this decision underscored the importance of meaningful human intervention.[6] Simply having a human rubber-stamp a decision made by an automated system does not satisfy the requirements of Article 22. For human involvement to count as a true safeguard against the risks of ADM, it must influence the final decision in a substantive way, ensuring that individuals are not subjected to fully automated decision-making without recourse (§§ 41-50).

Despite the CJEU’s ruling, challenges remain in fully defining and enforcing the limits of automated decision-making. The issue lies in distinguishing between fully automated processes and those where human involvement exists but may be too minimal to provide real oversight. In complex decision-making processes, particularly those that span multiple stages, determining whether a decision is truly automated or simply aided by automation becomes difficult. This lack of clarity leaves room for companies to exploit grey areas in the law. For example, businesses could argue that their systems are not fully automated because humans are involved at certain stages, even if those stages do not meaningfully alter the outcome of the process. The SCHUFA case illustrates the need for ongoing regulatory vigilance to ensure that the safeguards enshrined in Article 22 are fully realized in practice.

  2. The Balance Between Transparency and Commercial Secrecy in ADM

While the SCHUFA case primarily revolved around the rights of individuals to access information about automated decision-making processes, it also touched on a secondary but equally important issue: the balance between the transparency required by the GDPR and the need to protect commercial secrets.

One of the core tenets of the GDPR is the right of individuals to understand how their data is used, particularly when that data is processed by automated systems to make decisions that affect them. In Article 15 of the GDPR, individuals are granted the right to access the data that has been processed about them, as well as the “logic involved” in any automated decision-making. This right is crucial for ensuring that individuals can understand and, if necessary, challenge decisions that significantly impact their lives.

In the context of ADM, transparency involves providing clear and meaningful information about how algorithms function, how they process data, and how they reach decisions. This transparency is vital for ensuring fairness and accountability, particularly when individuals are subjected to decisions based on complex and opaque systems like credit scoring algorithms.

However, the right to transparency is not absolute. Companies that develop ADM systems, particularly those involving proprietary algorithms, often argue that revealing the inner workings of these systems would compromise their intellectual property and commercial secrets. In the SCHUFA case, the company claimed that its algorithm for calculating credit scores was a commercial secret and that revealing the details of the algorithm would harm its business.

This raises a significant legal and ethical dilemma: how can the need for transparency in ADM be reconciled with the protection of commercial secrets? The GDPR recognizes this tension and allows for certain limitations on the right to access information if disclosing that information would infringe upon the rights and freedoms of others, including intellectual property rights. In this respect, Recital 63 of the GDPR explicitly states that the right to access should not adversely affect trade secrets or intellectual property.

  • Striking a Balance: The Court’s View

The CJEU did not directly address the issue of commercial secrecy in its ruling on the SCHUFA case, but Advocate General Pikamäe’s opinion provided useful guidance (§§ 54-56). He acknowledged that while commercial secrecy is a legitimate concern, it cannot be used as a blanket justification for refusing to provide information. Companies must still provide individuals with enough information to understand the key aspects of the decision-making process, even if that means disclosing certain proprietary elements in a way that does not compromise the secrecy of the full algorithm.

Pikamäe’s opinion suggests that companies should focus on providing “sufficiently detailed” explanations of how ADM systems function, rather than disclosing every technical detail of the algorithm. This means explaining the logic behind the system, the criteria used to make decisions, and the factors that could influence the outcome, all without necessarily revealing the full code or proprietary methods. The court’s silence on this specific issue leaves some ambiguity, but Pikamäe’s balanced approach offers a way forward. By requiring companies to disclose enough information for individuals to exercise their rights without forcing them to reveal their trade secrets entirely, the GDPR can strike a balance between the competing interests of transparency and commercial secrecy.
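One way to picture this “sufficiently detailed” disclosure is an explanation that ranks the factors driving an individual decision without exposing the underlying weights. The sketch below is hypothetical: the factor names and contribution values are invented for illustration and do not reflect any real scoring system.

```python
# Hypothetical sketch: explain an automated score by ranking the factors
# that influenced it, without disclosing the proprietary weights themselves.
# Factor names and contribution values are invented for illustration.

def explain_decision(contributions: dict, top_n: int = 3) -> list:
    """Rank factors by absolute impact and describe direction, not magnitude."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for factor, impact in ranked[:top_n]:
        direction = "raised" if impact > 0 else "lowered"
        lines.append(f"{factor} {direction} your score")
    return lines

# Per-applicant contributions (weight x feature value), computed internally
# by the controller and never shown to the data subject in raw form.
contributions = {
    "payment history": +0.435,
    "debt-to-income ratio": -0.090,
    "account age": +0.060,
    "recent credit inquiries": -0.020,
}
explanation = explain_decision(contributions)
```

The data subject learns which criteria mattered and in which direction, which is arguably the kind of meaningful information Articles 13-15 contemplate, while the algorithm’s exact parameters remain undisclosed.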

  • The Path Forward: Navigating Conflicts Between Transparency and Secrecy

The tension between transparency and commercial secrecy in ADM is unlikely to go away, particularly as AI and machine learning systems become more sophisticated and pervasive in decision-making processes. To navigate this tension, regulators and courts will need to develop clearer guidelines on what information must be disclosed under the GDPR and what can remain protected as a trade secret.

In particular, the issue of how much information about an algorithm’s logic must be disclosed remains an open question.[7] As ADM systems become more complex, ensuring that individuals have meaningful access to information without infringing on commercial interests will require ongoing legal refinement. At the same time, businesses must recognize that transparency is not just a legal obligation but also an ethical one. Providing individuals with the information they need to understand and challenge ADM decisions is essential for maintaining trust in automated systems and ensuring that these systems are used responsibly.

  3. Conclusion: The Legacy of the SCHUFA Case

The SCHUFA case illustrates the complexities of regulating automated decision-making in the digital age. As ADM systems become more prevalent, the law must continue to evolve to protect individuals’ rights while balancing the legitimate interests of businesses in protecting their intellectual property. The CJEU’s ruling expands the scope of Article 22, ensuring that individuals are not subjected to automated decisions without meaningful human intervention. At the same time, the case highlights the need for greater clarity on how to balance transparency with commercial secrecy, a challenge that will continue to shape the future of ADM governance.

The GDPR provides a strong foundation, but as the Artificial Intelligence Act (AI Act) comes into play, it will be essential to ensure that these two regulations work in tandem. The AI Act introduces more stringent obligations for high-risk AI systems, including credit scoring systems, requiring companies to conduct impact assessments on fundamental rights and ensure human oversight. This regulation aims to address some of the gaps in the GDPR, particularly concerning the accountability and transparency of AI systems. In conclusion, the SCHUFA case highlights the importance of maintaining transparency and accountability in automated decision-making processes. As automated systems become more prevalent, the law must evolve to protect individuals’ rights without stifling innovation. Finding the right balance between transparency, privacy, and commercial interests will be crucial in shaping the future of ADM governance.

[1] Teresa Rodríguez de las Heras Ballell, “Guiding Principles for Automated Decision-Making in the EU,” ELI Innovation Paper (2022): 28.

[2] Francesca Palmiotto, “When Is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis,” German Law Journal 25, no. 2 (March 2024): 210–36, https://doi.org/10.1017/glj.2023.112.

[3] Simona Demková, Automated Decision-Making and Effective Remedies : The New Dynamics in the Protection of EU Fundamental Rights in the Area of Freedom, Security and Justice, Elgar Studies in European Law and Policy (Cheltenham, UK: Edward Elgar Publishing, 2023), https://search.ebscohost.com/login.aspx?direct=true&AuthType=ip,sso&db=nlebk&AN=3668800&site=ehost-live&scope=site&custid=s2944713.

[4] Lee A. Bygrave, “Article 22 Automated Individual Decision-Making, Including Profiling,” in The EU General Data Protection Regulation (GDPR): A Commentary, ed. Christopher Kuner et al. (Oxford University Press, 2020), https://doi.org/10.1093/oso/9780198826491.003.0055.

[5] Gianluca Fasano, “L’interpretazione estensiva della nozione di ‘decisione automatizzata’ ad opera della Corte di giustizia: una prospettiva più ampia ma ancora fragili tutele per le libertà fondamentali,” Rivista italiana di informatica e diritto 6, no. 2 (August 27, 2024): 15–15, https://doi.org/10.32091/0169.

[6] Asymina Aza, “Scores as Decisions? Article 22 GDPR and the Judgment of the CJEU in SCHUFA Holding (Scoring) in the Labour Context,” Industrial Law Journal, August 29, 2024, dwae035, https://doi.org/10.1093/indlaw/dwae035.

[7] Reuben Binns and Michael Veale, “Is That Your Final Decision? Multi-Stage Profiling, Selective Effects, and Article 22 of the GDPR,” International Data Privacy Law 11, no. 4 (November 1, 2021): 319–32, https://doi.org/10.1093/idpl/ipab020.
