Overview of online hate speech


Hate speech has been defined as “communications of hatred or denigration aimed towards an individual or an organization based on a group characteristic, such as racial or ethnic background, sexuality, disability, religion, or sexual orientation” (Person et al., 2016). Online hate speech is simply hate speech that occurs in cyberspace. It shares some features with offline hate speech but also has characteristics of its own. This article explores those characteristics and several phenomena that extend from online hate speech, in order to build a comprehensive picture of online hate speech, the models used to govern it, and the problems those models encounter.


Firstly, online hate speech is highly transmissible. The ability to disseminate all types of content, including hate speech, has grown as the Internet in general, and social media in particular, have become primary means of everyday communication. Almost any user with Internet access now has a similar chance to reach an audience, whereas in the past, substantial resources and connections were required to spread a message broadly (Person et al., 2016). In the rapid communication environment created by social media and advances in information and communication technology, natural cues and conventional feedback carry less weight (Oksanen & Keipi, 2013). The high transmissibility of online hate speech is therefore unsurprising.

Secondly, online hate speech is anonymous. Studies conducted more than 30 years ago found that unrestrained behavior and hostile messages were common in anonymous computer-mediated communication, a phenomenon known at the time as “flaming” (Kiesler et al., 1984, as cited in Matamoros-Fernández, 2017). Anonymity lowers the cost of speech, even unfavourable speech, because it grants relative independence from the external pressures of social norms, feedback, and accountability. To avoid responsibility and possible risk, acts of online hate speech are therefore often carried out anonymously.

Thirdly, online hate speech has marked regional characteristics. Understanding and addressing hate speech requires in-depth local knowledge because it is highly context-dependent: this knowledge includes local interpretations of hate speech, any laws that may be used to criminalize it, and government policies that restrict free speech (Sinpeng et al., 2021).

Users’ sensitivity to particular words varies across regions and cultures: words that are discriminatory or insulting in one region may cause no discomfort to users in another (Levin, 2002). The final characteristic concerns online hate speech in platform governance: subjectivity. Platforms give users a variety of technical devices for managing and reporting controversial content, including flags, reporting tools, filters, and blocking features, yet these procedures are inherently constrained because they offer no opportunity for open, public debate about what constitutes offensive behavior (Matamoros-Fernández, 2017).

Thus, opaque review mechanisms, kept undisclosed for reasons of commercial or algorithmic confidentiality, often introduce subjectivity into the methods platforms use to define and screen online hate speech.
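The flag-and-report mechanism discussed above can be made concrete with a minimal sketch. This is not any real platform's system: the thresholds and state names are invented for illustration, and the point is precisely that choosing those thresholds is an undisclosed, subjective decision made by the platform alone.

```python
# A minimal sketch (hypothetical, not any platform's real system) of
# flag-based content moderation: a post's visibility is determined by
# how many user flags it has accumulated, measured against thresholds
# that the platform chooses privately. The subjectivity described in
# the text lives in these two constants.

REVIEW_THRESHOLD = 3   # assumed: flags needed before human review
HIDE_THRESHOLD = 10    # assumed: flags needed to auto-hide the post

def moderation_state(flag_count: int) -> str:
    """Map a post's flag count to a moderation state."""
    if flag_count >= HIDE_THRESHOLD:
        return "hidden"
    if flag_count >= REVIEW_THRESHOLD:
        return "queued_for_review"
    return "visible"
```

Because the thresholds are never debated publicly, two platforms running this same logic with different constants would classify identical content differently, which is the constraint Matamoros-Fernández (2017) identifies.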

Based on the above analysis, online hate speech has four significant characteristics: high transmissibility, anonymity, regionality, and subjectivity.

Extension points of online hate speech

A new type of racism exhibited through social media, referred to as “platform racism,” results from the interaction between the national specificity of racism and the media specificity of platforms and their cultural values. Platform racism is a byproduct of the liberal ideology that has governed the growth of the internet since its inception, and it has two meanings: (1) platforms can amplify and manufacture racist discourse, both by allowing users to deflect responsibility onto them and through designs and algorithms that shape sociability; (2) platforms enact a mode of governance that can be harmful to particular communities, as shown by their ambiguous policies (Sinpeng et al., 2021).

Platforms also encourage racist dynamics through their policies, algorithms, and corporate decision-making. For instance, Facebook created a category called “racial affinity” based on user activity, which marketers could include or exclude when targeting advertisements; in the housing sector, this meant marketers could exclude users with a “racial affinity” for African Americans or Hispanics, in violation of federal housing and employment laws (Angwin & Parris, 2016). Business practices built into specific platform features in this way lead directly to potentially discriminatory behavior.

Hate organizations have always been adept at using the media and new technologies (Person et al., 2016). With the continued growth of online hate speech and the lack of regulatory measures, online hate speech has gradually evolved from individual acts of aggression to organized group hatred. These organizations are made up of individuals, are somewhat organized, and openly target particular people or groups, despite pursuing various hateful agendas (Levin, 2002); they therefore exist only in opposition to others. Structured hate groups became increasingly active online throughout the 1990s: there were on average 400 hate websites during that period, in addition to a large number of websites run by KKK, neo-Nazi, racist skinhead, Christian Identity, and black separatist organizations (Biddle et al., 2008).

It is extremely challenging for internet administration to identify and control websites that exist solely to promote hatred. Because hate groups lack definite goals, their presence serves only to disseminate animosity in its most basic form. Controlling hate organizations will therefore eventually take center stage in discussions of internet administration.

In 2010, a Swedish man broadcast his suicide through a webcam: although he had previously stated his intention to hang himself, many viewers did not believe him and encouraged him with malicious comments (Wikinews, 2010). Such combinations of self-hatred and online hate speech can still appear in other forms today. Beyond online suicide videos, the 2000s saw the growth of extreme and pathological networks that spread knowledge about ways to deliberately harm oneself. Unfortunately, the Internet fosters a wide range of groups dedicated to self-hatred, violence, and suicide. Some of them widely disseminate information about suicide, murder, and death while inciting and instructing others to perform similar terrible acts. For individuals seeking it out, this content is easily accessible, and it is actively developed and distributed within these communities.

The issue of exposure to harm-promoting content is not particularly new: violent video games and films are frequently cited as the most fundamental and obvious causes of violent behavior and the self-hatred that accompanies it (Anderson et al., 2010).

In summary, internet hate speech has many different offshoots; “platform racism,” “hate organizations,” and “self-hatred” are only a few of the closely related ideas. A situation like this certainly makes internet administration more challenging.

Impacts of online hate speech

Previous studies have demonstrated that exposure to hate content and experiences of victimization are linked to unfavorable outcomes (Waldron, 2012). For instance, cybercrime, hate victimization, and other forms of online harassment can cause sleep disorders, heightened anxiety, and feelings of fear and insecurity. Beyond these negative psychological effects and even psychiatric symptoms, online hate has been shown to affect people’s daily actions and the way they interact with their surroundings (Gadgil et al., 2023). Research also shows that adolescents who have experienced online violence are more likely to develop emotional extremism and tend, gradually, to join the ranks of its perpetrators (Lewis & Arbuthnott, 2012). The effects of online violence thus form a vicious cycle: perpetrators are unscrupulous in their malice toward strangers, yet many perpetrators have themselves suffered severe emotional damage. Antisocial behavior, low self-esteem, hatred, and other negative emotions gradually become a form of self-protection for targets of online hate speech, who may in turn take part in spreading it. This creates a deeply fragmented problem for online governance: the people regulators are trying to protect may one day become the bullies. As with attempts to uphold justice offline, governance and the punishment of malice are often out of step with the vicious incidents themselves.

Online hate speech governance and problems encountered

Facing increasingly common incidents of online hate speech, Internet platforms and governments are actively looking for ways to control and prevent them. Facebook, for instance, has worked in recent years to expand its human review operations and to strengthen its content review regulations and accountability procedures. To improve its capacity to confront discrimination at the local and national levels, it has begun hiring marketing specialists, and it has strengthened stakeholder engagement to better monitor hate speech, collaborating with a growing number of academic institutions and civil society organizations (Sinpeng et al., 2021).

At the same time, governments around the world are introducing legal provisions to regulate and supervise the online environment. After the violent incidents of 2019, the Australian government adopted the Sharing of Abhorrent Violent Material Act, which regulates service providers, hosts, and social platforms in order to limit the spread of malicious content online (Douek, 2020).

However, for various reasons, the standardized governance of online hate speech always comes with many problems.

In terms of platform censorship, some platforms, such as Facebook and YouTube, hold that certain speech may be excessive to a degree but should not be considered hate speech because it falls under the category of humor. Facebook and YouTube, which often use nation-specific banning algorithms, do not go into further detail about what constitutes “humor” or “unpopular points of view.” Since using satire and irony to mask racist and sexist abuse is a prevalent online practice that promotes discrimination and harm, preserving humor as a guarantor of freedom of expression creates difficulties for those who endure abuse (Milner, 2013, as cited in Matamoros-Fernández, 2017). Moreover, this is a multicultural, multilingual, multireligious world with a complicated political environment, and people frequently mix several languages. Research shows that despite some advances, hate speech monitoring on Facebook remains error-prone because of language barriers (Wijeratne, 2020, as cited in Sinpeng et al., 2021).

In addition, subjectivity cannot be avoided in a platform’s definitions of, and guidelines for, hate speech. As the earlier examination of its qualities indicated, online hate speech is both subjective and regional. Platform employees in the Asia Pacific region have a hard time comprehending hate speech from the Americas (Chmiel et al., 2014), so the manual review procedure is inefficient and inaccurate. When presented with many language combinations, machine classifiers that have not yet reached maturity lose their ability to discriminate. Platforms and governments must therefore find ways to mitigate these issues while still standardizing the governance environment.
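The language problem described above can be illustrated with a deliberately naive sketch. This is not any platform's actual classifier: the word list and example sentences are invented for illustration, and the filter stands in for any monolingual, context-blind screening approach.

```python
# A deliberately naive keyword filter (hypothetical word list),
# illustrating why immature algorithmic screening fails across
# language combinations: the list is English-only and context-blind,
# so the same meaning expressed in another language slips through
# (a false negative), while a news report quoting a slur is flagged
# (a false positive).

BLOCKLIST = {"vermin", "subhuman"}  # invented, English-only word list

def naive_flag(text: str) -> bool:
    """Flag text iff it contains a blocklisted token (case-insensitive)."""
    tokens = text.lower().split()
    return any(tok.strip(".,!?") in BLOCKLIST for tok in tokens)

# A direct attack in English is caught...
assert naive_flag("They are vermin.")
# ...but the same meaning in mixed-language text is invisible to it:
assert not naive_flag("Son alimañas, every one of them")
# ...while a report merely quoting the slur is wrongly flagged:
assert naive_flag("The leaflet called refugees vermin, police said")
```

Both failure modes, missing hate expressed outside the filter's language and flagging speech that only mentions a slur, mirror the review errors attributed to language barriers above.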


The article above has detailed the four characteristics of online hate speech, namely “transmissibility,” “anonymity,” “regionality,” and “subjectivity.” In the relevant governance models and problem analysis, these characteristics also influence one another and become issues worth exploring. In addition, this article has briefly explored three phenomena that extend from online hate speech: “platform racism,” “hate organizations,” and “self-hatred.” Although platforms and governments have made corresponding efforts to regulate online hate speech, many problems still surface in the process of governance. But with AI technology and improved legislation, perhaps one day online hate speech will disappear from the Internet entirely (Sarpila, 2014).

Gadgil, G., Prybutok, G., & Prybutok, V. (2023). Mediation of transgender impression management between transgender privacy paradox and trans Facebook persona: A trans perspective. Computers in Human Behavior, 143, 107700. https://dl.acm.org/doi/10.1016/j.chb.2023.107700

Anderson, C. A., Shibuya, A., Ihori, N., Swing, E. L., Bushman, B. J., Sakamoto, A., Rothstein, H. R., & Saleem, M. (2010). Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: A meta-analytic review. Psychological Bulletin, 136(2), 151–173. https://pubmed.ncbi.nlm.nih.gov/20192553/

Angwin, J., & Parris, T., Jr. (2016, October 28). Facebook lets advertisers exclude users by race. ProPublica. https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race

Biddle, L., Donovan, J., Hawton, K., Kapur, N., & Gunnell, D. (2008). Suicide and the internet. BMJ, 336(7648), 800–802. https://www.bmj.com/content/336/7648/800

Chmiel, A., Sienkiewicz, J., Paltoglou, G., Buckley, K., Skowron, M., Thelwall, M., et al. (2014). Collective emotions online. In N. Agarwal, M. Lim, & R. T. Wigand (Eds.), Online Collective Action (pp. 59–74). Vienna: Springer.

Douek, E. (2020, August). Australia’s “Abhorrent Violent Material” law: Shouting “nerd harder” and drowning out speech. Australian Law Journal, 94, 41. https://ssrn.com/abstract=3443220

Levin, B. (2002). Cyberhate: A legal and historical analysis of extremists’ use of computer networks in America. American Behavioral Scientist, 45(6), 958–988. DOI: 10.1177/0002764202045006004.

Lewis, S. P., & Arbuthnott, A. E. (2012). Searching for thinspiration: The nature of Internet searches for pro-eating disorder websites. Cyberpsychology, Behavior, and Social Networking, 15(4), 200–204. DOI: 10.1089/cyber.2011.0453.

Matamoros-Fernández. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Oksanen, A., & Keipi, T. (2013). Young people as victims of crime on the Internet: A population-based study in Finland. Vulnerable Children & Youth Studies, 8(4), 298–309. DOI: 10.1080/17450128.2012.752119.

Person, Teo, M., & Keipi, N. (2016). Online hate and harmful content: Cross-national perspectives. Taylor & Francis. https://doi.org/10.4324/9781315628370

Wikinews. (2010, October 12). Swedish man uses webcam to broadcast suicide live on internet. https://en.wikinews.org/wiki/Swedish_man_uses_webcam_to_broadcas

Sarpila, O. (2014). Attitudes towards performing and developing erotic capital in consumer culture. European Sociological Review, 30(3), 302–313. DOI: 10.1093/esr/jct037.

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://hdl.handle.net/2123/25116.3

Waldron, J. (2012). The Harm in Hate Speech. Cambridge, MA: Harvard University Press.
