EU’s fight against disinformation – Who owns the truth?

1. Introduction

In 2024, elections will take place in over 60 countries, making it a “year of elections.” Voters will likely seek information on political parties online in order to make informed decisions. But where should they look for this information? In recent decades, the internet has become an increasingly important source of information, both for forming opinions and for participating in public discourse. However, the vast amount of information available can be overwhelming, requiring organized sorting to help users find relevant content.

While this approach has many merits, it has also led to several challenges. Online platforms use algorithms to manage content visibility. These algorithms often display similar content repeatedly, potentially hiding other important information. The ease of publishing online raises concerns about the accuracy and diversity of available content.

Since online platforms rely on advertising revenue, they prioritize content that maximizes user engagement over diversity. This can be problematic during elections, as biased content and misinformation may influence voter opinions.[1] To address disinformation, the EU Commission has taken steps to combat this growing issue.

2. Measures adopted

These measures can be broadly categorized as either legislation or soft law.

In 2017, Germany introduced the Network Enforcement Act (Netzwerkdurchsetzungsgesetz – NetzDG). This law contained several compliance rules for online platforms and was intended, among other things, to help identify and remove criminal fake news more quickly. Companies had to set up a complaints office where users could report actual or suspected illegal content, such as fake news or hate speech. Reported content had to be reviewed within a short period of time and, if necessary, removed. Platforms were also required to report regularly on how complaints were handled. It is important to note that breaches of these obligations could result in draconian penalties: the employee responsible for handling complaints could be fined up to €5 million, and the company itself up to €50 million if proven to have violated the law.[2] The law was partially repealed on 13 May 2024.

Subsequently, the EU adopted laws such as the Digital Services Act (DSA) and the Digital Markets Act, which together form a uniform set of rules throughout the EU.

Like the NetzDG, the DSA also sets out rules[3] for implementing complaints procedures when users wish to take action against illegal content. Decisions taken by online platforms to remove, retain or block content can be appealed. In the future, providers will be obliged to explain their reasoning in an open and comprehensible manner, and users will be able to review this decision-making process through legal remedies. EU Member States will be required to designate competent authorities for the enforcement of the law, which will be able to take steps against breaches of the law by providers and impose heavy fines.

The initial impetus for the soft law measures came from a group of experts that the EU asked to propose measures to combat fake news. This group recommended a controlled self-regulatory approach. The EU Commission then asked social networks to identify disinformation and bots and label them accordingly. Although Google, Facebook, X (Twitter) and Mozilla committed in the “Code of Practice on Disinformation”[4] to providing clear policies and rules to improve critical thinking and media literacy, this was considered insufficient. As a result, the EU has published several action plans and guidelines. But that was not all.

In addition, the European Commission has issued a Communication on Fighting Disinformation on the Internet, which outlines the measures being taken.[5]

The EU also introduced the Central European Digital Media Observatory, where fact-checkers and academics work together to analyze disinformation. They also help authorities limit the spread and impact of disinformation.[6]

Another EU measure against the proliferation of disinformation is the European External Action Service’s so-called East StratCom Task Force. The Task Force’s flagship project, EUvsDisinfo,[7] was launched in 2015 to better predict, detect and respond to Russia’s ongoing disinformation campaigns affecting the EU, its member states and their common neighbors. The main goal is to raise public awareness of the Kremlin’s disinformation operations so that they are more recognizable, and to help everyone become more resilient to the manipulation of digital information and media. The Task Force’s members come from diverse backgrounds, including journalism, communications, the social sciences, and Russian studies.

These are just some of the institutions set up by the EU. While each has a distinct mandate, they perform essentially the same tasks. Meanwhile, even private start-ups have been established that appear to offer little beyond what the EU’s own institutions already provide.[8]


3. What might be the most prudent conclusion to draw from this analysis?

The EU, as well as Member States[9], are constantly fighting disinformation. However, there are reasons for concern.

Firstly, there are numerous institutions that are ostensibly at the vanguard of the battle against disinformation. It is challenging to ascertain the precise number of such institutions, given the plethora of entities that purport to be engaged in this endeavor.

Moreover, there is a dearth of transparency, a quality that the EU likes to demand from online platforms. Who is on the teams tasked with detecting, verifying and correcting disinformation? What exactly qualifies these people to make this decision? The relevant websites are rather silent on that matter.

Furthermore, it is worth noting that these institutions are financed and supported by the EU. How interested will they be in keeping unpopular, yet legally unobjectionable, content online?

There is a great deal of concern about what is commonly referred to as “fact-checking.” Such assessments should be conducted by independent, certified, and peer-reviewed institutions. The International Fact-Checking Network (IFCN) is a global association of fact-checkers that has created a Code of Principles. Under this code, all signatories are subject to review. However, this “review” is merely a checklist asking whether the fact-checker made the corrections that they themselves considered necessary; whether they rectified errors reported by others is irrelevant. This is no peer review.[10] The most significant fact-checking organization in Germany, Correctiv, also receives financial support from the federal government and from the state of North Rhine-Westphalia.[11] It is difficult to assess the impartiality of such organizations.

An even bigger problem, however, is the legislation that has been enacted to combat disinformation. The German NetzDG, which in many respects served as a model for the Digital Services Act, was criticized by numerous experts as unconstitutional and contrary to European law. With this law, the German government has, in essence, outsourced the censorship of unpopular opinions to private companies. The power over freedom of expression thus lies with private, non-democratically legitimized corporations.

It has become challenging for users to have their legally protected posts reinstated by the platforms. The lawsuits filed by users in Germany over the last few years show that, despite clear court rulings, platforms typically force affected users to pursue legal action.

Moreover, this route is not only costly but also carries the risk that courts are unaware of the issue or unwilling to address it. Joachim Nikolaus Steinhöfel[12] has successfully conducted many such cases before German courts. In his recently published book, he reports on several cases in which obscure decisions were made. In addition, courts across Germany have handed down divergent judgments, with no uniform approach to the legal issues. Whether users succeed in enforcing their legitimate claims therefore also depends on where they are located. It seems reasonable to assume that the outcomes of the proceedings conducted under the NetzDG may provide an outlook on what the situation will look like under the Digital Services Act.

The Digital Services Act, whose scope extends beyond that of the NetzDG, uses terms such as “disinformation” without explicitly defining them. This leaves the EU free to define the term at its discretion. Since there is no precise, generally binding definition for those affected to rely on, it will be challenging for them to take action against the deletion of content. The rule permitting a single European country to enforce a Europe-wide deletion when a posting is illegal in that country alone can also be seen as an attack on freedom of expression.

Finally, Article 36 of the DSA also gives rise to significant concern regarding the free exchange of opinions. Even during the global pandemic, online platforms such as YouTube deleted content at the behest of the World Health Organization (WHO) if it did not comply with the organization’s requirements.[13] Article 36 contains a “crisis response mechanism” through which bodies appointed by the EU Commission can act against platforms in the event of a “crisis.” In cases of doubt, this extends to the complete shutdown of the platform in question if it fails to act expeditiously against “illegal content.” However, it is often impossible to determine with certainty whether content is in fact illegal. Furthermore, the EU has the discretion to determine whether a “crisis situation” exists. Given that the population of the EU has been kept in a state of crisis for several years, the EU Commission has thus acquired a virtually uninterrupted power of intervention. The mere declaration of an “alleged” crisis suffices to justify classifying content as potentially dangerous.[14]

Fearing stiff penalties for leaving illegal content online, platforms often adopt the safest strategy: deleting any content that could be construed as problematic. This results in the removal of a significant amount of legitimate content, effectively silencing users.

It appears that what was intended as a “fight against disinformation” has become, in essence, a “fight against disliked statements.” The laws, as well as the so-called soft law measures, will result, or have already resulted, in the silencing of political opponents or simply dissenters.

The Internet once offered a unique opportunity for many people to express themselves on issues, raise their voices, or simply obtain a wide range of information from a variety of sources. The EU has turned the advantage of the Internet into its opposite through the enactment of numerous rules, despite its professed desire to promote diversity of opinion.

There is still the remaining question of who in an open society should legitimately decide what is wrong and what is true.

That should be neither governments nor private companies.

[1] For all see: Miriam C. Buiten, Combating disinformation and ensuring diversity on online platforms: Goals and limits of EU platform regulation, January 2022, esp. 2, https://ssrn.com/abstract=4009079

[2] See Bundesministerium für Justiz und Verbraucherschutz (Federal Ministry for Justice and Consumer protection), Network Enforcement Act Regulatory Fining Guidelines, 2018/9-11, 19

https://www.bundesjustizamt.de/SharedDocs/Downloads/DE/NetzDG/Leitlinien_Geldbussen_en.pdf?__blob=publicationFile&v=3

[3] see https://eur-lex.europa.eu/eli/reg/2022/2065/oj

[4] EU Commission, Shaping Europe’s digital future, https://digital-strategy.ec.europa.eu/en/library/2018-code-practice-disinformation

[5] EU Commission, Shaping Europe’s digital future, Tackling online disinformation, https://digital-strategy.ec.europa.eu/en/policies/online-disinformation

[6] https://cedmohub.eu/

[7] https://euvsdisinfo.eu/

[8] See e.g.: Facts for Friends, https://www.deutschland.de/de/topic/kultur/faktencheck-in-sozialen-medien-startup-facts-for-friends

[9] Such as Germany, see https://www.bmi.bund.de/SharedDocs/schwerpunkte/EN/disinformation/measures-taken-by-the-federal-government.html

[10] With regard to this topic, in particular: Joachim Nikolaus Steinhöfel, Die digitale Bevormundung, München 2024, chapter 4

[11] https://correctiv.org/ueber-uns/finanzen/

[12] Lawyer in Hamburg/Germany, who was the first in Germany to sue online platforms and won. He is regarded – also internationally – as a pioneer in the fight for freedom of expression. For information in English language see https://www.bbc.com/news/blogs-trending-41042266

[13] Steinhöfel, p. 216

[14] Steinhöfel, p. 217
