- Introduction: Relevant provisions of the AI Act
Under Regulation (EU) 2024/1689 (the AI Act), systemic risks are described as risks that have a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can be propagated at scale across the value chain[1]. A model which has high-impact capabilities[2] will be classified as possessing systemic risk[3]. The Commission can also designate models as having systemic risk[4].
Article 55[5] of the AI Act requires providers of models with systemic risk to evaluate these models in accordance with standardised protocols and tools reflecting the state of the art, “including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks”[6]. Providers are then required to assess and mitigate possible systemic risks at the Union level[7]. They must report relevant information about serious incidents and possible corrective measures to address them[8]. Finally, they are required to ensure an adequate level of cybersecurity protection for these models[9]. To meet these obligations, providers will be able to rely on the AI Code of Practice (COP) once the text is finalised[10].
The finalised text will cover how a provider should identify systemic risks at the Union level, including identification of the type and nature of the systemic risks and their sources[11]. The finalised Code of Practice must also detail the “measures, procedures and modalities for the assessment and management of the systemic risks at Union level”[12].
- AI COP (2nd Draft): Systemic Risk provisions
Under AI COP (2nd Draft)[13], signatories commit to the identification of systemic risks[14]. They further commit to undertaking a rigorous analysis of the systemic risks identified[15]. Signatories then commit to comparing the results of their risk analysis against pre-defined risk tiers in order to assess the level of risk[16].
As part of their commitment to risk identification, providers commit to including selected systemic risks. The AI COP (2nd Draft) details the selected systemic risks that must be considered by providers: cyber offences; chemical, biological, radiological and nuclear risks; large-scale harmful manipulation; large-scale illegal discrimination; and loss of human oversight[17]. Signatories commit to treating these as systemic risks[18]. Signatories must also commit to considering other risks when identifying additional systemic risks[19]. These other risks include those related to infrastructure and system reliability, risks related to fundamental rights (for example privacy infringement, surveillance, and the generation of harmful content, including child abuse) and any other risks with large-scale negative effects on fundamental rights, public health, democratic processes, public security, economic stability, environment and non-human welfare or human agency[20]. The draft describes sources of risk as elements (for example, events, components, actors and their intentions or activities) that alone or in combination give rise to risks (for example, model theft or widespread cyber vulnerabilities). It goes on to identify the different sources that providers must commit to considering in their risk analysis[21]. These include the model’s capabilities[22], the model’s propensities[23] (for example the tendency to “deceive”, discriminatory bias, or the tendency to “hallucinate”) and the model’s affordances and deployment context[24] (for example a lack of explainability or transparency, interactions with other AI models or systems, and model inadequacies or potential failures).
- Key considerations:
The topic of systemic risks is one of the most consequential parts of the Code of Practice. While the section as it stands raises many interesting questions, this piece focuses on the ambiguity within the systemic risk section and the significant margin it leaves for subjective assessment on the part of the provider. Two examples illustrate this point.
- Example 1: Large scale harmful manipulation:
As described above, signatories must treat cyber offences, chemical, biological, radiological and nuclear risks, large-scale harmful manipulation, large-scale illegal discrimination, and loss of human oversight as systemic risks. Consider large-scale harmful manipulation. This is described as the “facilitation of large-scale manipulation with risks to fundamental rights or democratic values, specific to high-impact capabilities of models such as autonomy, persuasion and manipulation”[25]. AI COP (2nd Draft) continues, “This includes, for example, large-scale election interference, and coordinated and sophisticated manipulation campaigns leading to harmful distortions of public discourse, knowledge and behaviour.”
The recent election issues in Romania show the difficulties that can arise in assessing whether election manipulation has taken place. In Romania, pro-Russian campaigns are accused of using TikTok to influence elections. In response, the election was annulled, which provoked a backlash and rocked Romanian democracy to its core. Such instances are becoming more common, and the ability of public and private actors to counter these threats is becoming muddied in an increasingly difficult political environment.
The real-world application of this provision on large-scale election interference will be difficult. Significant margin is left to the provider to decide whether the bar of election manipulation has been reached. Addressing this critical issue through a code of practice is a risky avenue.
- Example 2: Risk to fundamental rights:
As part of AI COP (2nd Draft), signatories must commit to considering other risks when identifying additional systemic risks. These other risks include risks related to fundamental rights (for example privacy infringement, surveillance, and the generation of harmful content, including child abuse) and any other risks with large-scale negative effects on fundamental rights, public health, democratic processes, public security, economic stability, environment and non-human welfare or human agency[26]. This differs from cyber offences or large-scale harmful manipulation discussed above, which providers must commit to treating as systemic risks. It is not clear what impact the distinction between “commit to treating” and “commit to considering” will have. However, making the distinction appears to be at odds with the definition of systemic risk and the place of fundamental rights in the AI Act as a whole.
Returning to AI COP (2nd Draft), signatories must commit to considering risks related to fundamental rights, including privacy infringement, surveillance, and the generation of harmful content, including child abuse. These examples of risks related to fundamental rights are useful. Signatories must also commit to assessing “any other risks with large scale negative effects on fundamental rights, public health, democratic processes, public security, economic stability, environment and non-human welfare or human agency.” Is it right to have a provider assessing whether something is a risk to public security, fundamental rights, or society as a whole? This statement creates significant uncertainty and leaves considerable room for provider assessment. Furthermore, AI COP (2nd Draft) does not provide any guidance on what this obligation consists of, and in doing so hinders providers.
It may be that the distinction between “commit to treating” and “commit to considering” is precisely because of the sometimes nebulous nature of concepts like fundamental rights, public health, democratic processes, public security, economic stability, environment and non-human welfare or human agency. But is the answer to this to somewhat lessen their role when undertaking systemic risk assessments?
While no legal instrument is without some degree of uncertainty, and ambiguity in legislation can sometimes serve a useful purpose, the provisions on systemic risk raise serious questions. The relegation of the substantive guidance on systemic risk to a code of practice, and the ambiguity and significant margin for subjective assessment that this affords the provider, are risky. The current wording in AI COP (2nd Draft) does not mitigate that risk. It also places a significant burden on the provider to make these assessments without sufficient practical guidance. The AI Act left these considerations to a code of practice and, in doing so, placed these protections in a vulnerable state.
- Third draft and beyond:
The Third Draft General-Purpose AI Code of Practice is due to be published between February and March 2025. It will be created in a much wider geopolitical context. Earlier this month, JD Vance specifically called out pieces of EU legislation as being too onerous on American companies. In a comment to MLex, an independent expert working on the next draft of the AI Code of Practice said they were expecting mounting political pressure “to dilute or even eliminate AI Act”. Time will tell whether the key elements of AI COP (2nd Draft) survive.
The draft will continue to develop over the course of the next few months. The final version of the Code of Practice is scheduled to be presented at a Closing Plenary in May 2025 and then published.[27] Once drafting is completed, it will be assessed by the AI Office and the AI Board. The Commission can then approve the Code of Practice by means of an implementing act.[28]
[1] Art 3 (65), Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence.
[2] A model is presumed to have high impact capabilities where “the cumulative amount of computation used for its training measured in floating point operations is greater than 10^25”.
[3] Art 51 (a) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence.
[4] Ibid Art 51 (b).
[5] Ibid Art 55.
[6] Ibid Art 55 (a).
[7] Ibid Art 55 (b).
[8] Ibid Art 55 (c).
[9] Ibid Art 55 (d).
[10] Ibid Art 53 (4).
[11] Ibid Art 56 (c).
[12] Ibid Art 56 (d).
[13] Second Draft General-Purpose AI Code of Practice, December 2024, accessible at this link: https://digital-strategy.ec.europa.eu/en/library/second-draft-general-purpose-ai-code-practice-published-written-independent-experts.
[14] Ibid Commitment 7, Section 3.
[15] Ibid Commitment 8, Section 3.
[16] Ibid Commitment 9, Section 3.
[17] Ibid Section 3.2.
[18] Ibid Section 3.2.
[19] Ibid Section 3.3.
[20] Ibid Section 3.3.
[21] Ibid Section 3.4.
[22] Ibid Section 3.4.1.
[23] Ibid Section 3.4.2.
[24] Ibid Section 3.4.3.
[25] Ibid Section 3.2.
[26] Ibid Section 3.3.
[27] Predicted date of publication of the final version, correct as of 23/02/25: https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice#timeline
[28] More information on the second draft and a link to download the draft are available here.