How should hate speech and online harm be moderated in the pursuit of free speech?

Donald Trump and Facebook

Freedom of speech in an international context

With the popularization of new media platforms, more people express their views through online networks, and that same reach has made the dissemination of hate speech more convenient. Many people are harmed online, and society suffers a corresponding negative impact, so moderation and management of the network play a crucial role in addressing hate speech and online harm. In the international community, different countries and different platforms regulate hate speech differently: because of differences in national history and culture, understandings of hate speech and approaches to governing it vary widely. EU countries advocate legal regulation to combat the spread of hate speech, while the United States, although it does not condone hate speech, is wary of the impact of such regulation on freedom of expression, an impact that is difficult to assess; this is essentially what makes combating online hate speech so difficult in an international context. Drawing on the international community's understanding of hate speech and its governance practices, this essay treats the constitutive elements of hate speech as the criteria for identifying it and, by combining the governance practices of Internet platforms and the European Union, analyzes the particular situation of domestic governance of hate speech. The premise for moderating online hate speech and online harm is that we clearly recognize what hate speech and online harm are and what impact they have.

What is online hate speech

How is hate speech defined

We should make a clear distinction between freedom of speech and hate speech. Freedom of speech means that we are creating a cyber world where everyone, anywhere, can express their beliefs and feelings, no matter how unique, without fear of being forced into silence or conformity (Barlow, 1996). The problem of online hate speech has a long history and is a concrete manifestation of the spread of hate speech in the cyber world. With today's tense and volatile international situation, localized conflicts persist, and international problems such as the deepening world economic and debt crises, the refugee crisis, terrorism, violent crime, and conflicts between religious beliefs intertwine, online hate speech is increasing. The infiltration of terror, violence, and extremism into online social media has had an unprecedented negative impact on the security of the international community, and online hate speech has gradually come to the attention of the international community.

Hate speech is defined as speech that "expresses, encourages, provokes, or incites hatred against a group of people distinguished by a particular characteristic or set of characteristics, such as race, ethnicity, gender, religion, nationality, or sexual orientation", launching a direct attack on a protected characteristic of another person rather than on an idea or custom (Parekh, 2012, as cited in Flew, 2021); disability is also among the protected characteristics.

Facebook’s rules and moderation on hate speech

Take Facebook as an example: the platform has clear rules about hate speech. These rules cover violent rhetoric, dehumanizing statements, harmful stereotypes, and any form of expression that demeans, belittles, or expresses disgust or contempt for others. The platform also firmly prohibits any call to ostracize or isolate others. In addition, Facebook explicitly prohibits a form of speech known as "harmful stereotyping", or dehumanizing comparisons; this type of speech is often used to attack, intimidate, or marginalize specific groups of people and is often closely associated with real-world violence. Notably, when hate speech also targets another characteristic such as age, Facebook treats that characteristic as protected as well, which means that any offensive speech targeting it will be strictly regulated by the platform. Finally, while platforms such as Facebook are committed to protecting refugees, migrants, immigrants, and asylum seekers from the most severe attacks, they still allow commentary on and criticism of immigration policies; and to enforce these rules more accurately, Facebook considers, according to the realities and nuances of each region, whether specific words or phrases commonly function as attacks on groups with a protected characteristic. This nuanced approach to regulation helps maintain harmony and fairness in cyberspace, ensuring that every user can communicate and interact in a safe and friendly environment (Meta, 2024).

Facebook removes hate speech when it finds it. In the two months leading up to June 2017, the platform deleted an average of about 66,000 posts reported as hate speech per week, around 288,000 posts per month globally. This figure includes posts that may have been reported for hate speech but were removed for other reasons, and it excludes posts that were reported for other reasons but removed as hate speech (Allan, 2017). It is clear that certain content should be removed when it is hate speech, as it includes direct incitement to violence against a protected characteristic, or insults and dehumanization. Facebook also reports any imminent threat of violence it finds, including threats based on a protected characteristic of a group of people, to local law enforcement.

Hate speech linked to politics

Hate speech is intertwined with misinformation and extremist political material, especially in politically volatile countries, countries with histories of racism, religious intolerance, and sexism, and countries experiencing mass migration due to war, famine, political persecution, and poverty (Singer & Brooking, 2018). When politicians make inflammatory and hateful speeches, they deepen social division, and this increases the probability of political violence and terrorism in society. Hate speech occupies a prominent place in statements released by political leaders in countries such as Russia, the United States, Israel, Ukraine, and Iraq, and these statements are not merely empty words or political performance. When political leaders use hate speech, there is more terrorism within their countries (Piazza, 2020), and this holds not only in the United States but in other countries as well.

Source: Global Terrorism Database, University of Maryland

Since Donald Trump began campaigning for the U.S. presidency in 2016, domestic terrorism in the United States has more than doubled. According to the Global Terrorism Database, the U.S. averaged 26.6 domestic terrorism incidents per year during Barack Obama's two terms as president; in 2017 and 2018, the first two years of the Trump administration, there were 66 and 67 domestic terrorist attacks respectively, more than double the Obama-era average (Global Terrorism Database, 2020). The language of hate speech is often violent or emotional; hate speech runs against the principles of human rights; its reach has grown exponentially in every corner of the world in the Internet age; and it fosters mistrust and hostility in society while denying the human dignity of those it attacks.
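The arithmetic behind the "more than doubled" claim can be checked directly from the figures cited above:

```python
# Figures as cited above (Global Terrorism Database, 2020).
obama_avg = 26.6          # average U.S. domestic terrorism incidents per year over two terms
trump_years = [66, 67]    # incidents in 2017 and 2018

trump_avg = sum(trump_years) / len(trump_years)  # 66.5 incidents per year
print(trump_avg > 2 * obama_avg)                 # 66.5 vs. 53.2 -> True
```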

Online Harms

Online harm has become a nuisance for many Internet users: a Pew Research Center survey found that 41% of Americans surveyed in 2021 had experienced some form of online harassment (Pew Research Center, 2021). Types of online harm include offensive name-calling, physical threats, prolonged harassment, and sexual harassment. Online sexual harassment is verbal, nonverbal, or physical behavior of a sexual nature that occurs online or through digital channels, for example sending intimate or explicit images or videos of another person or of the victim via social media or email without their consent. It can have serious emotional, psychological, and social consequences for the victim, who may report feelings of victimization, humiliation, and fear. Statistics on sexual harassment released by the Australian Bureau of Statistics on August 23, 2023 showed that 1.3 million women had been sexually harassed in the previous 12 months, and that 57% of these victims had been sexually harassed online; nor were women the only targets, as 426,800 men had also been sexually harassed in the same period (Australian Bureau of Statistics, 2023).


With the growing number of Internet users and faster transmission speeds in the new media era, moderation of the Internet and social media is essential: moderation makes every Internet or social media user responsible for their own community conversations and behavior, and a platform will not survive if it loses moderation (Gillespie, 2018). Platforms can use algorithms to automatically detect and filter out content that violates the platform's community guidelines, such as hate speech, nudity, or spam. Social media platforms also use human review to decide whether to remove content or allow it to be posted, based on the platform's community norms; Facebook, for example, relies on underpaid workers in India and the Philippines who process large volumes of user content daily, including disturbing sexual and racist images (Punathambekar, 2019, as cited in Curran & Hesmondhalgh, 2019). Nor is it only social media that moderates speech and behavior violating community rules: the state also acts against publishers of hate speech. In Germany, for example, the law explicitly prohibits incitement to hatred, and a publisher who posts such content online may be the subject of a police raid; in the United States, by contrast, even the most egregious speech is legally protected by the U.S. Constitution (Allan, 2017).
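As a minimal sketch of what algorithmic pre-filtering might look like before human review (the pattern list, function names, and queue labels here are hypothetical placeholders; real platforms combine large machine-learning classifiers with human moderators, not a static keyword list):

```python
import re

# Hypothetical placeholder patterns standing in for a real policy lexicon.
BLOCKED_PATTERNS = [
    r"\bslur1\b",
    r"\bslur2\b",
]

def auto_flag(post: str) -> bool:
    """Return True if the post matches any blocked pattern (case-insensitive)."""
    return any(re.search(p, post, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def moderate(posts: list[str]) -> dict[str, list[str]]:
    """Route posts: auto-flagged ones go to human review, the rest are published."""
    flagged = [p for p in posts if auto_flag(p)]
    published = [p for p in posts if not auto_flag(p)]
    return {"flagged_for_review": flagged, "published": published}

result = moderate(["hello world", "this contains slur1 here"])
```

The key design point this sketch illustrates is the two-stage pipeline Gillespie and the Facebook example describe: automation narrows the stream, but flagged items still land in a human review queue rather than being silently deleted.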


As the Internet and social media platforms grow ever more popular, we call for freedom of speech while also focusing on users' responsibility as publishers. Publishing hate speech can trigger terrorism, violence, and other harmful acts; hate speech is a precursor to threats against the security of state and society; and hate speech and other unlawful online harms inflict both physical and psychological damage on their targets. Curbing hate speech and online harm therefore depends not only on Internet and social media platforms strengthening big data, artificial intelligence, and other emerging technologies to ensure that content review is fair, consistent, and effective, and not only on national laws and the international community's attention to shared governance, but, more importantly, on each user complying with community rules, so as to create a safe and civilized online environment.

Reference List

Allan, R. (2017, June 27). Hard Questions: Who Should Decide What Is Hate Speech in an Online Global Community? Meta.

Australian Bureau of Statistics. (2023, August 23). Sexual harassment, 2021–22 financial year.

Barlow, J. P. (1996, February 8). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation.

Pew Research Center. (2021, January 13). The state of online harassment. Pew Research Center: Internet, Science & Tech.

Curran, J., & Hesmondhalgh, D. (Eds.). (2019). Media and society (6th ed.). Bloomsbury Academic.

Flew, T. (2021). Regulating Platforms. Polity Press.

Global Terrorism Database. (2020). GTD Search Results.

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press.

Meta. (2024). Hate Speech | Transparency Center.

Parekh, B. (2012). Is There a Case for Banning Hate Speech? In The Content and Context of Hate Speech (pp. 37–56). Cambridge University Press.

Piazza, J. (2020, September 28). When politicians use hate speech, political violence increases. The Conversation.

Piazza, J. A. (2020). Politician hate speech and domestic terrorism. International Interactions, 46(3), 1–23.

Punathambekar, A., & Mohan, S. (Eds.). (2019). Global digital cultures: Perspectives from South Asia. University of Michigan Press.

Singer, P. W., & Brooking, E. T. (2018). LikeWar: The weaponization of social media. Houghton Mifflin Harcourt.
