With the popularity of the Internet and the globalization of social media, freedom of speech has become a defining label of the Internet and of many social media platforms. Individuals can now express themselves freely online, which has also enabled hate speech, online harm, and the spread of harmful content. Hate speech promotes or encourages discrimination, hostility, or violence against a group of people based on race, ethnicity, religion, sexual orientation, gender identity, or other characteristics. Online harm, on the other hand, refers to a range of negative experiences that individuals can suffer online, such as cyberbullying, trolling, doxing, and revenge porn. Hate speech is widely regarded as a form of expression whose effects can be more serious than physical harm, because the mental toll of verbal attacks is a sustained, long-term injury (Sinpeng et al., 2021). As such, the online harm caused by hate speech cannot be underestimated and should be broadly addressed and supported by policy (Sinpeng et al., 2021). Terry Flew (2021) argues that policies on digital platforms and the Internet need to pay attention to the role ideology plays in network practices (Flew, 2021). This blog will explain the impact and spread of hate speech and, through case studies of digital platforms, analyze the importance of policy in governing hate speech and online harm.
Spread of Hate Speech
The main spreaders of hate speech can be divided into individual users and group organizations. The main avenues for individual users to spread hate speech are social media and online platforms, and the nature of the Internet itself amplifies the online harm hate speech causes. The anonymity of the web makes it easier for individual users to say harmful things or engage in harmful behaviour without fear of being identified or held accountable. According to Zimbardo's (1969) theory of deindividuation, the state in which individuals or groups are not seen or attended to is called deindividuation (Christopherson, 2007). In a state of deindividuation, users face fewer intrinsic constraints on their behaviour (Christopherson, 2007), and anonymity encourages them to express personal thoughts or create diverse content more freely, fueling hate speech.
Another factor contributing to hate speech and online harm is the echo chamber effect (Cinelli et al., 2021), which exists across all forms of social media. Social media algorithms often present users with content that reinforces their existing beliefs, which can produce echo chambers. According to group polarization theory, the echo chamber effect then acts as a mechanism that reinforces the existing views of an individual or group and pushes hate speech toward more extreme positions (Cinelli et al., 2021). Echo chambers can also make it easier for individuals to engage in harmful behaviour, because they are surrounded by like-minded people who may encourage or validate that behaviour.
Comments are one of the most common ways individuals express hate speech. Because of anonymity and the echo chamber effect, comments often attract more users to voice personal opinions or endorse another user's statements, so it is unsurprising that comments cause online harm to individuals or groups. Chinese media earlier reported that a 24-year-old female college student suffered from depression and committed suicide after sustained online abuse, simply because she had dyed her hair pink. She had shown her postgraduate acceptance letter to her grandfather at his hospital bedside, filmed the heartwarming scene, and uploaded it to the Internet. However, the comments below the post were not blessings from netizens but criticism and malicious speculation about her pink hair. Although the girl took her own life because of these negative comments, the netizens who posted them paid no price and bore no legal responsibility. The low cost of online violence thus continually erodes the constraints of speech and morality on individuals.
In addition, the Internet hosts many different groups and organizations, from democracy advocates to racist groups, all of whom recognize the Internet's influential role in disseminating ideas and opinions (Tsesis, 2022). Hate groups use this relatively inexpensive medium to spread ideology and demeaning speech targeting race, ethnicity, nationality, gender, and sexual orientation (Tsesis, 2022). Individuals with racist tendencies can anonymously participate in racist group activities and spread related images and texts. It can be argued that the Internet provides a global platform for these advocates of inequality and discrimination and is widely used by hate groups calling for race wars (Tsesis, 2022). According to AP News reporting, there were 838 active hate groups across the United States in 2020, and many have turned to social media platforms and encrypted apps to avoid tracking. The echo chamber effect also applies to hate groups: the Internet can generate enormous cohesion, even when that cohesion serves hateful ends. We must admit that the Internet has enabled individuals who produce hate speech to gather quickly and form organized, planned groups.
The Impact of Hate Speech
Hate speech and online harm have personal, social, legal, political, and national consequences that can be significant and long-lasting. Personally targeted hate speech can lead to anxiety, depression, and a diminished sense of self-worth. In extreme cases, it can lead to physical violence and death, as with the pink-haired girl driven to suicide by online abuse. Likewise, victims of online harm may experience severe emotional distress and may be harassed and threatened both online and offline. Online harm can also negatively affect an individual's professional life, personal life, and mental health.
On the other hand, hate groups can undermine social stability and fuel political unrest. They spread hate speech online and gather illegally offline, seriously damaging public order and safety. For example, five people were killed, including a Capitol police officer, after Trump supporters and members of far-right groups violently stormed the U.S. Capitol, according to the SPLC report. Federal authorities arrested more than 160 people and linked about 30 defendants to a group or movement, and evidence shows the riot was closely connected to white supremacist groups. The SPLC contends that in the final year of the Trump presidency, racism was systematically exploited, with racist conspiracy theories and white nationalist ideology used as political tools (Morrison, 2021); this had a marked impact on American politics and society.
Academic research shows that the harm caused by hate speech can be divided into two categories: causal and constitutive (Sinpeng et al., 2021). Causal harm concerns the consequences that hate speech produces for individuals or groups (Sinpeng et al., 2021). In other words, causality emphasizes the indirect harm speech inflicts on the target group, such as encouraging discriminatory speech against that group or inciting discriminatory acts against it (Sinpeng et al., 2021). In extreme cases, causal hate speech can lead to physical violence (Sinpeng et al., 2021). Constitutive harm means that the hater's speech directly damages the target group; that is, the speech itself is the harm (Sinpeng et al., 2021). For example, hate speech demeans, discriminates against, and smears individuals or specific groups of people. In general, hateful or discriminatory speech robs its targets of their equality and sense of freedom (Sinpeng et al., 2021). At the same time, the degree to which hate speech affects victims depends on the speaker's context and the victim's reaction (Sinpeng et al., 2021).
Platform Policing of Hate Speech
In response to hate speech and online harm, various social media platforms have introduced regulatory systems to manage online content. Platforms govern hate speech mainly through the following methods: content removal, warning labels, account suspension, artificial-intelligence identification, and cooperation with NGOs and experts. Some platforms have strict policies on hate speech and remove any content that violates them. For example, Facebook's Community Standards prohibit hate speech as attacks based on "protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability". Content that violates these standards is removed.
Some platforms also use warning labels to flag content containing hate speech or misinformation. For example, Twitter may place a warning label on a tweet that violates its hateful conduct policy, along with a message explaining why the tweet was flagged. In addition, some platforms use machine learning and artificial intelligence to identify and flag hate speech; Instagram, for instance, uses machine learning to identify and remove comments that violate its anti-hate speech policy. Platforms also work with NGOs and experts to develop policies and guidelines to regulate hate speech. For example, in 2019, Twitter partnered with the Anti-Defamation League to create new approaches and tools to identify and combat hate speech on its platform.
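To make the tiered moderation logic described above concrete (automated flagging that either removes content, attaches a warning label, or leaves it alone), here is a deliberately simplified sketch. It uses a toy keyword list rather than a trained machine-learning model, and the terms and thresholds are invented placeholders, not any platform's actual policy.

```python
# A toy moderation pipeline: keyword matching stands in for the
# machine-learning classifiers real platforms use. All terms and
# thresholds below are hypothetical illustrations.

FLAGGED_TERMS = {"slur1", "slur2", "threat"}  # placeholder terms, not real policy

def moderate(comment: str) -> str:
    """Return a toy moderation decision: 'remove', 'warn', or 'allow'."""
    words = comment.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in FLAGGED_TERMS)
    if hits >= 2:
        return "remove"   # multiple flagged terms: take the content down
    if hits == 1:
        return "warn"     # single hit: attach a warning label for review
    return "allow"        # no hits: leave the content up
```

In practice the binary keyword check would be replaced by a classifier's confidence score, but the tiered response (remove, label, allow) mirrors the policy structure the platforms describe.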
State Policies on Facebook
The worldwide increase in hate speech, and the severe consequences when it ferments, have forced countries to strengthen their vigilance. Public and government attention has long focused on preventing hate speech from affecting people's health, social stability, and national security. Facebook has a huge user base worldwide, and as one of the largest social media platforms in the world, it plays an essential role in dealing with hate speech. Different countries govern and police hate speech on Facebook in different ways. The United States, where Facebook is headquartered, strongly protects freedom of speech in law, so in the U.S. Facebook generally only deals with hate speech that involves real threats or direct attacks on individuals.
Regarding political content, Facebook generally does not restrict speech, even when it involves hate speech against a particular group. Germany, by contrast, is very strict about hate speech, especially speech echoing the Nazism of World War II. Germany's Network Enforcement Act (NetzDG), which came into force in 2018, requires social media platforms such as Facebook to delete content involving hate speech within 24 hours or face heavy fines; the law has, however, been criticized for risking the restriction of free speech. The UK also has strict rules on hate speech on Facebook. In 2017, the UK Home Office issued the Cyber Violence and Hate Speech Action Plan, requiring social media platforms to take measures against hate speech and online bullying, and the United Kingdom has also established an online abuse watchdog to monitor the behaviour of social media platforms. Singapore imposes stringent restrictions on freedom of speech, and anything involving hate speech can be considered illegal there. In 2019, Singapore enacted the Protection from Online Falsehoods and Manipulation Act, requiring social media platforms to remove content involving hate speech and false information or face heavy fines. In general, the governance and supervision of hate speech on Facebook differ across countries and cultures. When dealing with hate speech, platforms and regulators must balance freedom of speech against the fight against hate speech, protecting the rights and interests of vulnerable groups without excessively restricting expression.
The online harm caused by hate speech is persistent and multidimensional. The Internet's anonymity and echo chamber effect allow hate speech to spread rapidly, harming individuals, groups, and even countries. Although media platforms and governments constantly improve their regulatory systems and legal policies, online harm continues to grow. Identifying hate speech remains a challenging problem for governments and platforms alike, and the line between free speech and hate speech requires constant thought and practice.
Christopherson, K. M. (2007). The positive and negative implications of anonymity in Internet social interactions: “On the Internet, Nobody Knows You’re a Dog”. Computers in Human Behavior, 23(6), 3038–3056. https://doi.org/10.1016/j.chb.2006.09.001
Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118
Flew, T. (2021). Regulating Platforms. Polity Press.
Hate speech policy—YouTube Help. (n.d.). Retrieved 10 April 2023, from https://support.google.com/youtube/answer/2801939?hl=en
Instagram. (n.d.). Tackling Abuse and Hate Speech on Instagram | Instagram Blog. Retrieved 10 April 2023, from https://about.instagram.com/blog/announcements/an-update-on-our-work-to-tackle-abuse-on-instagram
Meta. (n.d.). Hate speech: Publisher and Creator Guidelines. Meta Business Help Centre. Retrieved 10 April 2023, from https://en-gb.facebook.com/business/help/170857687153963
Morrison, A. (2021, April 20). Hate groups migrate online, making tracking more difficult. AP News. https://apnews.com/article/hate-groups-decline-migrate-online-c8683e13fb094155c011835b49b9676a
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf
Toor, A. (2016a, July 13). German police raid homes over Facebook hate speech. The Verge. https://www.theverge.com/2016/7/13/12170590/facebook-hate-speech-germany-police-raid
Toor, A. (2016b, September 22). Facebook is expanding its campaign to combat hate speech. The Verge. https://www.theverge.com/2016/9/22/13013440/facebook-hate-speech-campaign-expansion
Toor, A. (2017, June 23). Facebook launches program to combat hate speech and terrorist propaganda in the UK. The Verge. https://www.theverge.com/2017/6/23/15860868/facebook-hate-speech-terrorism-uk-online-civil-courage-initiative
Tsesis, A. (2022). Destructive Messages: How Hate Speech Paves the Way For Harmful Social Movements. New York University Press. https://doi.org/10.18574/nyu/9780814784297.001.0001
Twitter’s policy on hateful conduct | Twitter Help. (n.d.). Retrieved 10 April 2023, from https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy