The role of the Venice Commission in democracy oversight through the Internet

In recent years the Venice Commission has analyzed several national rules aimed at limiting the Internet’s diffusion on account of its specificity vis-à-vis the already known media. The question thus arose whether, and to what extent, the Internet could be assimilated to the latter. The Commission’s answer was consistently positive, with the effect of rejecting the states’ attempts to limit freedom of expression on the web. On the other hand, the Commission released Opinion no. 974/2019, which contains a set of principles aimed at pointing out the risks, no less than the opportunities, which the enormous diffusion of digital technologies is likely to bring about in the fields of democracy and freedom of expression. The Author seeks to demonstrate that, contrary to what might prima facie appear, the latter Opinion’s approach does not counter, but rather complements, that formulated in the opinions rejecting the states’ attempts to limit freedom of expression on the web.

 

Summary: 1. Premise. – 2. Venice Commission’s Opinions regarding single national rules aimed at limiting the Internet’s diffusion. – 3. The Venice Commission’s “Principles for a Fundamental Rights-Compliant Use of Digital Technologies in Electoral Processes”. – 4. A complementary approach.

 

  1. Premise

The Venice Commission has repeatedly been asked how the digital system affects freedom of expression and the translation of the people’s will into votes and representation. While most cases concerned single aspects of the issue as regulated in a single country, Opinion no. 974/2019 instead contains a set of principles on the issue as a whole. Given these elements, I will first give a brief account of the Commission’s analysis of the impact of single rules regarding the web on the principles of freedom of expression and democracy, and will then focus on the constitutional implications of the general principles expressed in the above-mentioned Opinion.

 

  2. Venice Commission’s Opinions regarding single national rules aimed at limiting the Internet’s diffusion

In recent years the Venice Commission has analyzed several national rules aimed at limiting the Internet’s diffusion on account of its specificity vis-à-vis the already known media. Accordingly, the question arose whether, and to what extent, the Internet could be assimilated to the latter. The Commission’s answer was consistently positive, with the effect of rejecting the states’ attempts to limit freedom of expression on the web.

In particular, the Opinion on Law No. 5651 of Turkey on regulation of publications on the Internet and combating crimes committed by means of such publications (“the Internet Law”) recalled «that according to the Declaration of the Committee of Ministers of the Council of Europe on Freedom of Communication on the Internet, “[m]ember states should not impose on service providers a general obligation to monitor content on the Internet to which they give access, that they transmit or store, nor that of actively seeking facts or circumstances indicating illegal activity”. In case the regulations mentioned in these provisions impose such a general monitoring obligation, this would not be a proportionate burden on the public use providers in the light of the standards set forth in the above-mentioned Declaration of the Committee of Ministers» (CDL-AD(2016)011, §97).

In the same Opinion, the Venice Commission recalled «that, as the ECtHR has held in the case of Wegrzynowski and Smolczewski v. Poland, rectification or an additional comment on the website may be a sufficient and adequate remedy, in which case the access-blocking/removal of content measure may be considered as disproportionate to the legitimate aims pursued by the restriction and thus constitute a violation of the freedom of expression. […]» (§37). More generally, it also recalled Resolution 2035(2015) of the CoE’s Parliamentary Assembly on Protection of the Safety of Journalists and of Media Freedom in Europe, which considered «the generalised blocking by public authorities of websites or web services as a serious violation of media freedom, which deprives a high and indiscriminate number of Internet users of their right to Internet access» (§14).

Finally, the Commission urged Turkey to introduce a proper notification procedure «in order to give the content providers the opportunity to have knowledge of the blocking measure and of the reasons put forth by the authorities to justify the measure. […] It is strongly recommended that the provision be amended to impose on the authorities the obligation to notify the interested party about the access-blocking measure and its reasons» (§44). Furthermore, the competent criminal court «should be empowered to review the necessity of maintaining the precautionary measure on access-blocking and to lift this measure immediately, if he/she considers that the measure is not necessary in order, for instance, to prevent any risk of irreparable damages pending substantial trial. […] It is not acceptable that the decision taken by a peace judgeship as a “precautionary measure” should be binding on the trial court judge in the substantive criminal proceedings. […]. Concerning the ex officio blocking orders issued by the Presidency of Telecommunication under conditions indicated in Article 8(4), i.e. the content or hosting provider is located outside the country or the content of publications constitutes offences of sexual exploitation of children, obscenity and prostitution, the necessity of this provision is not clear to the Venice Commission. The access-blocking measure under Article 8 is a precautionary measure taken in the framework of a criminal case and it should be the role and responsibility of a judge to assess the necessity of this measure during the criminal trial. The reasons set forth under Article 8(4) do not justify the competence of an administrative body to issue ex officio blocking orders without prior judicial review. This competence of the Presidency should be repealed» (§§51-53).

In Opinion no. 692/2012, adopted on 12-13 October 2013, regarding the Legislation pertaining to the Protection against Defamation of the Republic of Azerbaijan (CDL-AD(2013)024), the Venice Commission first premised that «enjoyment of freedom of expression remains considerably problematic in Azerbaijan. Journalists and the media continue to operate in a difficult environment and self-censorship is allegedly high among newspaper editors and journalists, in particular those who seek to expose economic and political corruption in the country», as confirmed by Mahmudov and Agazade v. Azerbaijan and Fatullayev v. Azerbaijan, where the ECtHR found that Azerbaijan had violated Article 10 of the ECHR. It then asserted inter alia that «Concerning statements made on line […], as there is a clear move from print to Internet journalism, it is increasingly important that equivalent defences are provided in defamation laws to those who act, respectively, as mere conduits for the passage of information on the Internet or who host websites. It is also important that hosts are required to set up an effective (self-policing) notice and takedown procedure. Requiring a complainant to go to court to get an order for takedown does not sufficiently protect the right of the person defamed. In addition, this discourages Internet service providers from taking responsibility, once on notice, for the websites they host. More generally, though not legally binding on Azerbaijan, European Union Directive 2000/31/EC and the defences set out therein may be used as a helpful reference in establishing the defences available to the various forms of internet service providers» (§92).

Opinion no. 798/2015, adopted on 19-20 June 2015, on Media Legislation (Act CLXXXV on Media Services and on the Mass Media, Act CIV on the Freedom of the Press, and the Legislation on Taxation of Advertisement Revenues of Mass Media) of Hungary (CDL-AD(2015)015), was no less clear in assimilating the Internet to the already known media. In particular, after having reported that, according to Section 6(1) of the Hungarian Press Act, journalists’ right not to disclose their sources of information covers only the sources of «media content providers and the persons they employ under contract of employment or some other form of employment relationship», the Venice Commission observed that «The border between freelance and in-house journalism is blurred, and there is no particular reason why freelance journalists should be excluded from the general rule. Appendix to Recommendation No. R(2000)7 […] [of the CM] defines “journalists” as “any natural or legal person who is regularly or professionally engaged in the collection and dissemination of information to the public via any means of mass communication”. Hence, although the employment relationship with a media outlet is the best proof that the person is a “journalist”, it is not the only possible proof. As explained by the authorities, Section 6 is interpreted as including freelance journalists, professional bloggers and alike; however, it should be more clear from the text of this provision». The Commission added, however, the reservation that the protection of sources remains the privilege of professional journalists.

 

  3. The Venice Commission’s “Principles for a Fundamental Rights-Compliant Use of Digital Technologies in Electoral Processes”

The Venice Commission adopted a different and, as we will see, complementary approach in Opinion no. 974/2019 on the “Principles for a Fundamental Rights-Compliant Use of Digital Technologies in Electoral Processes”, adopted on 11-12 December 2020. Those principles indeed aim at pointing out the risks, no less than the opportunities, which the enormous diffusion of digital technologies is likely to bring about in the fields of democracy and freedom of expression.

It is no coincidence that the Commission evoked the debate between the “apocalyptic and integrated” (Eco) as having taken hold of the relationship between technology and democracy (§ 6), and then reported the recently expressed view that «Digital technologies have reshaped the ways in which societies translate the will of the people into votes and representation, and they have to a large extent changed political campaigning. Even though digital technologies foster some aspects of the democratic contest, they also hamper them. The worldwide pervasiveness of digital technologies has moved the arena of democratic debate to the virtual world, raising many questions about their influence on voter turnout and the need to supervise and regulate online social behaviour»[1].

In particular, «The internet clearly affects the ways people communicate, conduct their behaviour and form their opinions. The speed and scope of digital technology has not only transformed the way public opinion can be formed but also provided the means for distorting reality to an extent unknown before in the era of traditional journalism with the imparting of news, information and ideas. The misuse of digital technology to manipulate facts, to spread disinformation in a strategic, coordinated fashion, to conduct surveillance by collecting information from (and about) citizens, and engaging political stakeholder groups, has affected people’s trust in democratic institutions and the rule of law. The impact of digital technology in empowering citizens and democratic representation is questioned in light of the above and the question arises whether or how this technology can be managed to prevent the factors distorting fundamental rights such as freedom of expression, opinion and information and the right to privacy with massive surveillance for political / financial purposes» (§ 16).

Here comes the question of how power can be limited within the web: «The principle of freedom of expression should not be interpreted in the sense that private companies have no responsibility for divulging political information from third parties. As explained in the Joint Report, “the few private actors who own the information superhighways are powerful and deregulated enough to dictate conditions on social, individual and political freedoms, thus becoming a third actor in the democratic arena», and «the use and abuse of personal data for electoral purposes, cloaked as freedom of commerce, might pose a serious threat to free elections and electoral equity at least in three aspects: first, because private actors might use such information to directly exert undue influence on the electoral competition; second, because internet and social media companies, arguing freedom of commerce, might restrict the access to such information according to their political preferences, hence granting an opaque advantage to some parties or candidates over others; and third, because the commoditisation of personal data represents a challenge to the surveillance of money in political campaigns. All these conducts could facilitate, conceal or even constitute offences against democracy that must be prosecuted and sanctioned» (§ 58).

While this recommendation refers to the current situation, a further step may regard the question of whether and how the few private companies with global control over the flow of information on the web could be limited. In this respect the Commission supports the recent call of the CoE’s Parliamentary Assembly on member states «to break up the monopoly of tech companies controlling, to a great extent, citizen’s access to information and data» in order to ensure an «open and free internet» which «serves the purpose of the voters to become more informed and engaged» (§ 78).

Apart from resorting to antitrust techniques, the Venice Commission’s suggestions concern limitations on individual platforms, which raises the alternative between self-regulation and international or national regulation.

According to the Commission, «99. In a mature and full democracy, a content platform or a social network must, as far as possible, guarantee the veracity of published content, or at least warn of the potential risks implied by certain publications or sources. Platforms have already adopted a set of measures such as requiring that political and issue ads be clearly labelled and restricting them to authorised users; deletion of fake accounts; approval of particular content and sources; increasing transparency in the process of buying political ads (buyers, amount, content, etc.). While such initiatives – which have been adopted either voluntarily or to comply with the law – are generally to be welcomed, they also run the risk of placing the responsibility of guaranteeing fundamental rights in private hands.

100. In any case, it is crucial that the response to the challenges posed by digital technologies on democracy and human rights is not left to self-regulatory mechanisms alone. As has recently been stated by the Parliamentary Assembly of the Council of Europe, “despite this contribution by the private sector, many regulatory problems remain unresolved and can only be tackled through international conventions as well as legislation at national and international level. Best practices and a better security agency co-operation should become normative in the defence of democratic elections.” Furthermore, “researchers and journalists must have better access to data on fake accounts and disinformation without social media companies strictly controlling them. Policy makers cannot regulate what they don’t understand, nor can they implement them and sanction non-compliance without independent checks and controls.” This should also apply to independent election observers (national but also international), while ensuring the protection of freedom of speech and the privacy of users. In addition, transparency and accessibility of private company regulations (e.g. electoral content policies), including appeals mechanisms, and transparency on the data that they remove/allow, need to be ensured».

In the same vein, the Commission favours «cautious, adaptable, and innovative» solutions, such as «specific codes of conduct adopted jointly by companies and public institutions, e.g. the EU Code of Practice on Disinformation and the Code of Conduct on Countering Illegal Hate Speech Online which has been developed by the European Commission in collaboration with several major digital technology companies (Facebook, Microsoft, Twitter and YouTube). The most ambitious task in this area would be the creation of an independent self-regulatory body for social media at international level» (§ 101).

On the other hand, it is noted, «social media companies, search engines, content aggregators and other relevant internet intermediaries need to e.g. state in their agreements the rules that users must abide by, the terms of service governing the use of the social media platforms and what kind of content the company will prohibit (provided that such a prohibition is general and not prohibiting otherwise legal speech), and offering a quick and reliable appeals process for users who believe their content was illegally or improperly blocked or removed. As already mentioned, social media sites have already implemented content-moderation policies under which they remove certain content. Direct incitement to violence or illegal activity is not protected speech, and it can and should be barred from social media platforms and the internet» (§ 102).

 

  4. A complementary approach

The Commission’s position against any limitation imposed on the diffusion of the Internet by governments such as those of Turkey, Azerbaijan or Hungary appears prima facie to counter the principles included in its Opinion on the fundamental rights-compliant use of digital technologies, including those demonstrating the need for a regulation of social media companies.

However, such an impression fails to consider a fundamental distinction. It is one thing to demonstrate in general that the Internet should be assimilated to the already known media with a view to ensuring the broadest guarantee of freedom of expression; it is another to recommend measures aimed at guaranteeing the veracity of published content and the privacy of users within social networks (rather than on the Internet in general). Those measures presuppose free access to the web, and should be viewed as complementary to, rather than contrasting with, the assumption that governments should not impose limits on freedom of expression. The objection that any kind of regulation of the web, and of the media in general, is as such illegitimate sees the shadow of censorship everywhere. But should the notion of censorship include even rules on the responsibilities of private companies regarding the veracity of content published on the web and the privacy of users? A positive answer conceives freedom of expression as an individual right that can never be treated as an instrument of power vis-à-vis the message’s recipient. The effects on the latter are thus simply ignored. All that counts is the full exertion of the individual’s right, even when it corresponds to that of a few private companies acting in the global digital market.

The opposite view reflects a more balanced conception, according to which freedom of expression creates connections and exchanges that need to be viewed from both sides. It is not the State that imposes undue burdens on citizens expressing their opinions. It is the recipients of the message, in particular those diffused on the web, who need to be protected from fake news and with regard to their own privacy.

The Venice Commission’s suggestions reflect a similar approach, which seeks a solution to the new challenges posed by the worldwide diffusion of the digital system without falling under the label of “apocalyptic” or, to the contrary, that of “integrated”. The quotation of Umberto Eco’s dilemma may reflect such awareness.

 

[1] Joint report of the Venice Commission and the Directorate of information society and action against crime of the Directorate General of Human Rights and Rule of Law (DGI) on the Use of digital technologies and elections, CDL-AD(2019)016, para. 47.

 
