Content Moderation

By Priscilla Ruiz (Digital Rights Legal Coordinator, ARTICLE 19 Mexico & Central America Office)

Organizations across Mexico and Latin America have witnessed and criticized how content protected by freedom of expression has been illegitimately removed in order to evade public criticism of state conduct. Freedom of expression is not absolute: recognized exceptions include child pornography, war propaganda, and hate speech that constitutes incitement to violence or direct and public incitement to genocide. Yet States have invoked these criteria to remove content well beyond the categories of illegal content that courts have established when resolving cases concerning freedom of expression as a human right.

However, content moderation [1] is embedded in a complex, multidimensional system. This system involves [2], in the first place, actions and decisions about the distribution of content, based on rules set out in the terms and conditions and community standards of services provided by private companies in a digital environment that increasingly shapes the public sphere. In the second place, it involves the creation and implementation of laws [3], regulatory models generated by States with the aim of controlling the environment in which content moderation takes place.

Over the past two years it has become clear how this illegitimate practice occurs. On one hand, social media platforms suppress content through their terms and conditions or community standards, some of which are incompatible with international human rights standards [4]. On the other hand, governments invoke local law or platforms' internal mechanisms to request the removal of content without any prior judicial review to determine the legitimacy of the request, leaving anyone who tried to exercise their freedom of expression completely defenseless in the digital environment. The result is the blocking, destruction, or removal of images, journalistic articles, videos, or audio of any kind; that is, an illegitimate interference with the right to information and freedom of expression that disregards the applicable legal mechanisms, omits any balancing analysis, and fails to apply a proportionality test weighing freedom of expression against copyright [5].

It is also worth noting that content moderation on digital platforms produces a chilling effect whenever actions such as content removal [6] do not allow the user to fully understand the legal reasons and grounds on which their content was removed. Arbitrary content removal not only violates the freedom of expression of citizens and journalists; it also violates the copyright in their own publications. These rights arise the moment a work is created through the manifestation of ideas and the expression of authorship, as in a journalistic piece on a newsworthy issue, and removing such content therefore infringes the author's moral and economic rights and limits their copyright protection. [7]

When State actors or private entities such as web hosting services or social media platforms remove content without regard for international human rights standards, they restrict freedom of expression in the digital environment, limiting the flow of public-interest information that a democratic society receives. Unfortunately, and contrary to international freedom of expression principles, in recent years social media platforms, audiovisual channels, and web hosting services have tightened the terms and conditions of their services, their policies, and their community standards, justifying this with the protection of authors' intellectual property interests and even rights related to privacy and personal data.

The extensive and ambiguous internal rules of social media platforms and web hosting services, together with their intransigent application, have left users with virtually no knowledge of where or how to defend their digital rights against arbitrary mechanisms that contravene international human rights standards.

In recent years, organizations that defend digital rights, such as ARTICLE 19, have documented cases of journalists and activists whose important informative work has been removed from the internet through a mechanism known as "notice and takedown". Such is the case of the Digital Millennium Copyright Act (DMCA) notices filed against the media (as applied against Pedro Canché) and against various news outlets in San Luis Potosí, which ended in the removal of content that supposedly infringed the copyright of others. In many of these cases, the removed content belonged to the very publisher who received the removal notice, suppressing their freedom of expression not only as an author but also as a citizen.

These notifications were issued pursuant to United States copyright law, although almost none of the journalists had committed the copyright infringements defined in the DMCA. Unfortunately, this law has a disproportionate impact because it allows copyright owners to send a simple notice to the service provider claiming that content was published without their consent, after which the content can be removed immediately, on occasion without any legal review of the content but rather through an automated system.

This mechanism ends with internal decisions that social media platforms and hosting service providers make through procedures like "notice and takedown" which, far from protecting copyright, operate illegitimately, arbitrarily, and against international human rights standards. The decisions that digital platforms make lack jurisdictional safeguards beginning with formal and substantive requirements of admissibility and standing. These safeguards include (i) the right to due process and transparency, (ii) the application of adequate precautionary measures, and (iii) proportionality in decisions and the possibility of appealing the decisions made by digital platforms. Users, however, do not have access to such procedures, which brings us to the next question: should the State regulate this mechanism, or should digital platforms self-regulate under a human rights framework? This is the beginning of a broad discussion in which multiple stakeholders should participate.

[1] Content moderation "refers to the decisions and actions that companies take to determine what information is allowed on their platforms […] in the event that they determine that any content violates their rules, community regulations, policies and/or terms and conditions, they can resort to the removal of content, the blocking of user accounts or another measure". United Nations, General Assembly, Human Rights Council, Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, April 6, 2018, p. 3.

[2] "Content moderation is a tool used in order to exclude or prevent access to specific content within a platform or web service. This prerogative is usually exercised in a discretionary way by companies through their policies of use or terms of service, but it is sometimes also imposed by law and, in some cases, exercised due to pressure from governments". Guerrero, Carlos, "Content Moderation Policies", Hyperright, February 25, 2020.

[3] Center for Studies on Freedom of Expression and Access to Information (CELE) and The Initiative for Freedom of Expression on the Internet, The keys of the housekeeper: the strategy of internet intermediaries and the impact on the digital environment.

[4] ARTICLE 19, "Facebook community standards", June 2018.

[5] ARTICLE 19 Mexico and Central America Office, #LibertadNoDisponible: Censura y remoción… op. cit.

[6] The removal of content is understood as the practice of eliminating or restricting the circulation of information on the internet, using legal frameworks and private mechanisms that limit access to it and that are used in an illegitimate and irresponsible way to censor information of public interest that should circulate and remain accessible. This practice occurs through three main channels: (i) first, through requests to remove content from digital platforms, made by the State, public figures, or private actors; (ii) second, through direct communication by these same actors with journalists or the media to demand the removal of content from their websites or social media pages; (iii) third, through the application of the terms and conditions of service or the community regulations of the digital platforms.

[7] The Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights, in its report entitled Standards for a Free, Open and Inclusive Internet, has determined that the "removal of Internet content has an evident impact on the right to freedom of expression, both in its individual and social dimensions, and on the public's right of access to information. The information removed does not circulate, which affects the right of people to express themselves and disseminate their opinions and ideas and the right of the community to receive information and ideas of all kinds". Office of the Special Rapporteur for Freedom of Expression, "Standards for a Free, Open and Inclusive Internet", Inter-American Commission on Human Rights, March 15, 2017, paragraph 133, p. 54.