Misinformation, COVID-19 and Platforms’ Liability in the US


As the COVID-19 pandemic continues to spread fear across the world, countries are improving how they respond, focusing on how to deliver accurate health information.
In this context, however, internet platforms must be considered a key channel for the spread of misinformation.

To clarify the terms: misinformation is the spread of false information regardless of any intent to deceive, while disinformation is the deliberate spread of false or misleading information with an intent to deceive.
Research has shown that COVID-19 disinformation has attracted far more engagement than news from authoritative sources such as the World Health Organization (WHO).

Misinformation and disinformation on internet platforms have several observable harmful effects.
For instance, in the UK, the false claim that radio waves emitted by 5G towers make people more vulnerable to COVID-19 resulted in at least 30 arson attacks against telecom facilities.

Unlike traditional media, online platforms can personalize and automate content delivery using data about users’ past online activity. Content personalization algorithms can thus repeatedly expose people to the same content, or to content closely related to it; when that content is based on disinformation, the algorithm amplifies it.
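As a rough illustration of this feedback loop, the sketch below implements a toy similarity-based feed ranker: items are ranked by tag overlap with what the user already engaged with, so a single click on a conspiracy item surfaces more of the same. The catalog, tags and function names are hypothetical, not any platform’s actual algorithm.

```python
from collections import Counter

# Hypothetical mini-catalog of items with topic tags (illustrative only).
CATALOG = {
    "a1": {"5g", "covid", "conspiracy"},
    "a2": {"5g", "towers", "conspiracy"},
    "a3": {"vaccine", "who", "guidance"},
    "a4": {"covid", "who", "guidance"},
}

def rank_feed(history: list[str]) -> list[str]:
    """Rank unseen items by tag overlap with the user's past clicks."""
    seen_tags = Counter(tag for item in history for tag in CATALOG[item])

    def score(item: str) -> int:
        return sum(seen_tags[tag] for tag in CATALOG[item])

    unseen = [item for item in CATALOG if item not in history]
    return sorted(unseen, key=score, reverse=True)

# One click on a 5G conspiracy item ("a1") pushes the related
# conspiracy item ("a2") to the top of the next feed.
print(rank_feed(["a1"]))  # ['a2', 'a4', 'a3']
```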
To counter this phenomenon, online platforms, governments and health organizations must work together to fight the spread of misleading information. As concrete steps, several platforms have begun directing users to official sources of COVID-19 information and detecting and removing potentially harmful content.

In this regard, the joint statement signed by Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter and YouTube together with government healthcare agencies, aimed at combating fraud and removing disinformation about COVID-19, is remarkable.
As the past year has shown, three main forms of collaboration between platforms and public health authorities can be listed:

  • Highlighting, surfacing and prioritizing content from authoritative sources, for instance by redirecting users to information from the WHO when people search for COVID-19-related terms and hashtags.
  • Co-operating with fact-checkers and health authorities to flag and remove disinformation (for instance, Facebook started a co-operation with the International Fact-Checking Network (IFCN) to report false information about COVID-19, labelling flagged content and notifying users who try to share content deemed false; see the sketch after this list).
  • Offering free advertising to authorities (for instance, Google has offered free advertising credits to national and international authorities).
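
As a rough sketch of the second mechanism, the snippet below labels posts that match a list of claims rated false by fact-checkers and notifies the sharer. The claim list, matching logic and names are assumptions for illustration; real systems such as Facebook’s rely on ML classifiers and human review, not exact string matching.

```python
from dataclasses import dataclass

# Hypothetical claims rated false by fact-checkers (illustrative only).
DEBUNKED_CLAIMS = [
    "5g towers spread covid-19",
    "drinking bleach cures covid-19",
]

@dataclass
class Post:
    author: str
    text: str
    label: str | None = None

def moderate(post: Post) -> Post:
    """Label a post matching a debunked claim and notify its author."""
    lowered = post.text.lower()
    for claim in DEBUNKED_CLAIMS:
        if claim in lowered:
            post.label = "False information. Checked by independent fact-checkers."
            print(f"Notified {post.author}: this content was rated false.")
            break
    return post

post = moderate(Post("alice", "Proof that 5G towers spread COVID-19!"))
print(post.label)  # the warning label attached above
```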

 

After this quick analysis, several aspects could be examined in more detail; the one that caught my attention is the liability of online platforms for the spread of untrue information. In this regard, I want to draw attention to what has been done in the USA and the remedies that have been found. Under Section 230 of the Communications Decency Act 1996, website platforms enjoy immunity for third-party content shared on their services.

In particular, Section 230(c)(1) provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Moreover, companies that provide content and/or services over the internet (undoubtedly including social media operators) are not regulated by federal agencies.

This is why Members of Congress have focused on Section 230 when rethinking social media operators’ content moderation practices.
Although the statute does not name them explicitly, social media operators would likely fall under Section 230(f)(2) as interactive computer services, defined as any “information service, system, or access software provider that provides or enables computer access by multiple users to a computer server”.

Partly in response to the need to clarify the position of social media platforms with respect to the content published on their services, in May 2020 then-President Trump issued an executive order directing federal agencies to consider actions regarding Section 230, in particular the scope of its immunity provision for online platforms.1

Moreover, in September 2020 the Department of Justice sent draft legislation to Congress aimed at narrowing the liability protection stated in Section 230.
During the 116th Congress, several bills were likewise introduced to amend Section 230, with two main goals:

  • to clarify the liability protections interactive computer services receive for hosting or removing specific types of content.
  • to introduce legislation focused on addressing COVID-19 misinformation.

 

Looking at the proposals, the majority favour amending Section 230 to narrow the scope of liability protection, for instance by preventing the removal of specific categories of content. Academics and researchers have also put forward important suggestions for tackling the issues raised by amending Section 230.

One of the most controversial aspects is that amending Section 230, whether to require the moderation of misinformation or to limit the responsibility of computer services when they remove certain content, would affect how these entities operate their services.
What emerges is that remedies in either direction could lead to unforeseeable consequences. The most innovative solutions include proposals to set up a federal agency to regulate social media operators, and those put forward by the Hoover Institution, the most interesting of which are: defining rules for operators based on size and reach; allowing users to customize algorithmic filtering or curation settings; and opening the raw, unsorted and uncurated content feeds of dominant platforms to let others build customizable services that users may choose based on their content preferences.
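To make the user-customizable filtering idea concrete, here is a minimal sketch, assuming a platform exposed its raw feed and let users (or third parties) apply their own curation rules. The feed format, settings and names are hypothetical, not the Hoover Institution’s actual proposal text.

```python
from typing import Callable

# Hypothetical raw feed items, as an open platform might expose them.
RAW_FEED = [
    {"id": 1, "source": "who.int", "topic": "health", "flagged_false": False},
    {"id": 2, "source": "random.blog", "topic": "health", "flagged_false": True},
    {"id": 3, "source": "random.blog", "topic": "sports", "flagged_false": False},
]

# A user-chosen curation policy is just a predicate over feed items.
FilterRule = Callable[[dict], bool]

def build_filter(hide_flagged: bool, allowed_topics: set[str] | None) -> FilterRule:
    """Compose a curation rule from user-selected settings."""
    def rule(item: dict) -> bool:
        if hide_flagged and item["flagged_false"]:
            return False
        if allowed_topics is not None and item["topic"] not in allowed_topics:
            return False
        return True
    return rule

def curate(feed: list[dict], rule: FilterRule) -> list[dict]:
    """Apply a user's chosen rule to the platform's raw, uncurated feed."""
    return [item for item in feed if rule(item)]

# One user hides fact-checked-false items and follows only health topics;
# another user could pass different settings and see everything.
cautious = build_filter(hide_flagged=True, allowed_topics={"health"})
print(curate(RAW_FEED, cautious))  # only item 1 survives
```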

Amending Section 230 to address misinformation on interactive computer services, whether to increase or to limit moderation, could affect not only social media platforms but also many other types of entities, potentially including search engines, internet service providers and the comment sections of websites.

If Congress intends changes to Section 230 to apply only to social media platforms, it must face the challenge of crafting a definition of “social media platforms” that distinguishes them from other interactive computer services.
In conclusion, one remarkable aspect that must be considered is that legislation addressing the activities of U.S.-based social media sites in other countries may be difficult to craft, particularly if another country seeks to impose obligations that conflict with U.S. law. Conversely, it may not be possible for U.S. legislation to regulate the internal activities, such as the algorithms or content moderation practices, of foreign-based social media platforms.
