Assessing the Impact of Artificial Intelligence Systems on Fundamental Rights


Table of contents: Executive summary. – 1. Introduction. – 2. The European Constitutional Framework and Requirements for FRIA; 2.1 The FRIA in the AI Act; 2.2 GenAI, FRIA and Risk Assessment. – 3. From Theory to Practice: A Proposal for Implementing the FRIA Framework; 3.1 The Questionnaire; 3.2 The Matrix; 3.3 FRIAct: Integration of Questionnaire and Matrix; 3.4 FRIAct Lifecycle. – 4. LoanLens: a testing ground use case; 4.1 Overview of the AI systems; 4.2 The application of the FRIAct. – 5. Conclusion.

Executive summary

This paper introduces a structured and comprehensive framework for conducting a Fundamental Rights Impact Assessment (FRIA), tailored specifically to address the challenges posed by high-risk Artificial Intelligence (AI) systems, including Generative AI (GenAI). The study is grounded in the obligations set forth by Regulation (EU) 2024/1689, known as the Artificial Intelligence Act (AI Act), which emphasises the necessity of assessing and mitigating risks to fundamental rights. The FRIA bridges the gap between these regulatory obligations and their practical implementation.

This paper situates the FRIA as a critical tool mandated by the AI Act for certain high-risk AI systems and proposes a framework to conduct this assessment, named FRIAct (Fundamental Rights Impact Assessment AI Act). The analysis connects the theoretical commitments to fundamental rights protection with their practical operationalisation, providing deployers of AI systems with a replicable approach that aligns with European constitutional values and legal requirements. This approach also applies to GenAI systems, which often carry complex and large-scale implications for individuals and society.

The first part of the paper situates the FRIA within the broader European legal and constitutional framework, emphasising the foundational role of the Charter of Fundamental Rights of the European Union (CFREU). It explores how the fundamental rights enshrined in the CFREU shape the obligations of the AI Act. Notably, the Regulation underscores the importance of ensuring that high-risk AI systems, including GenAI, comply with European constitutional values. Indeed, GenAI systems have emerged as transformative technologies capable of reshaping industries, yet they also pose significant challenges. These include risks to privacy, data protection, access to effective remedies, and human dignity, as well as systemic concerns such as misinformation and the erosion of trust in automated systems. The AI Act explicitly extends its regulatory scope to GenAI, introducing specific safeguards for models with systemic risks. These provisions address the need for enhanced transparency, robust oversight, and proactive risk mitigation in the design, development, and deployment of GenAI systems.

The second part introduces the FRIAct, based on a two-pronged approach integrating qualitative and quantitative tools for assessing risks to fundamental rights. The qualitative tool, referred to as the Questionnaire, is designed to gather contextual and operational insights into the AI system, including its purpose, affected populations, technical characteristics, and the broader societal and ethical implications of its deployment. It evaluates risks and produces a Questionnaire Risk Indicator (QRI) that serves as the foundation for further analysis. The Matrix, on the other hand, adds an assessment specifically designed to produce a quantitative output: it systematically maps potential qualitative impacts on specific rights and then attributes a quantitative score to those impacts – this is why we also refer to the Matrix's outcomes as semi-quantitative. It evaluates risks along two key dimensions – Severity and Probability of Occurrence – and calculates an Impact Significance (IS) score for each right outlined in the CFREU. The FRIAct combines the Questionnaire and the Matrix to generate FRIAct Scores, a final output that quantifies the system's overall risk to each fundamental right. Both the Questionnaire and the Matrix are relevant during the pre-deployment and monitoring phases. This design ensures flexibility, enabling application across diverse AI systems and use cases while maintaining alignment with the AI Act's regulatory requirements.
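
To make the scoring pipeline concrete, the sketch below shows one way the Matrix's Severity and Probability dimensions and the QRI could be combined into a per-right FRIAct Score. The ordinal scales (1–4), the normalisation, and the averaging rule are our illustrative assumptions for demonstration only; the authoritative scales and formulas are those defined in the paper and its annexes.

```python
# Illustrative sketch only: the 1-4 scales and the aggregation rules
# (IS = Severity x Probability; FRIAct Score = mean of QRI and
# normalised IS) are assumptions, not the paper's published formulas.

from dataclasses import dataclass

@dataclass
class RightAssessment:
    right: str        # fundamental right from the CFREU, e.g. "privacy"
    severity: int     # assumed ordinal scale: 1 (low) to 4 (very high)
    probability: int  # assumed ordinal scale: 1 (rare) to 4 (frequent)

    @property
    def impact_significance(self) -> int:
        # Assumed Matrix rule: IS as the product of the two dimensions
        return self.severity * self.probability


def friact_score(qri: float, assessment: RightAssessment) -> float:
    """Combine the Questionnaire Risk Indicator (QRI, assumed to lie in
    [0, 1]) with a right's Impact Significance. The averaging rule is a
    hypothetical aggregation chosen for illustration."""
    normalised_is = assessment.impact_significance / 16  # max is 4 * 4
    return (qri + normalised_is) / 2


# Hypothetical values for a credit-scoring deployment
privacy = RightAssessment("privacy", severity=3, probability=3)
print(friact_score(qri=0.6, assessment=privacy))  # 0.58125
```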

The final part of the paper applies the FRIAct framework to a practical use case: LoanLens, a high-risk AI system designed for the credit scoring of natural persons. LoanLens integrates a traditional Machine Learning System (MLS) with a Generative AI-powered Decision Support System (DSS), creating a hybrid approach that combines structured and unstructured data processing. This section demonstrates how the FRIAct evaluates the system's risks relating to transparency, fairness, privacy, and human oversight in the decision-making process. The Questionnaire identifies the system's context, purpose, and operational risks, while the Matrix quantifies its impact on specific fundamental rights, such as privacy, non-discrimination, and human dignity. The case study underscores the importance of robust human oversight, as mandated by Article 14 of the AI Act, and the need for continuous monitoring to address evolving risks throughout the system's lifecycle.
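
For readers who want a concrete picture of the hybrid architecture described above, the following sketch composes a structured-data MLS with a GenAI-backed DSS, leaving the final decision to a human reviewer in line with Article 14. All class and function names here are hypothetical; the paper does not publish LoanLens's implementation.

```python
# Minimal sketch of a LoanLens-style hybrid pipeline. MLSModel,
# GenAIDSS, and assess_applicant are hypothetical names for
# illustration, not components published with the paper.

from typing import Protocol

class MLSModel(Protocol):
    """Traditional ML credit-scoring model over structured features."""
    def score(self, features: dict) -> float: ...

class GenAIDSS(Protocol):
    """GenAI decision-support component over unstructured documents."""
    def generate_rationale(self, score: float, documents: list[str]) -> str: ...

def assess_applicant(mls: MLSModel, dss: GenAIDSS,
                     features: dict, documents: list[str]) -> dict:
    # Structured data drives the credit score; unstructured documents
    # feed the GenAI narrative that supports, but never replaces, the
    # human decision-maker required by Article 14 of the AI Act.
    credit_score = mls.score(features)
    rationale = dss.generate_rationale(credit_score, documents)
    return {
        "credit_score": credit_score,
        "rationale": rationale,
        "final_decision": None,  # reserved for the human overseer
    }
```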

The paper concludes by arguing that the FRIAct framework not only fulfils the compliance requirements of the AI Act but also sets a benchmark for ethical and accountable AI deployment. The FRIAct framework represents a critical step toward embedding fundamental rights at the core of AI development and deployment. The proposed approach highlights the necessity of collaboration among regulators, AI providers, and deployers to ensure that AI systems not only comply with legal standards but also uphold the societal values enshrined in the CFREU. By providing deployers with a structured approach to assess and manage risks, this framework operationalises fundamental rights protection in a way that is practical, replicable, and adaptable to diverse AI systems.

To read the paper:

Assessing the Impact of Artificial Intelligence Systems on Fundamental Rights

Annexes:

1. Questionnaire – model

2. Matrix – model

3. Guidelines for the compilation of the Matrix

4. Questionnaire – completed

5. Matrix – completed
