In the age of the Internet, the web and social media have brought us enormous convenience: social media provides a virtual mode of social interaction for most people and a relatively simple way to access it (Tan, 2022), making it easier and faster to reach a wide range of information. But it also brings many downsides. One that cannot be ignored is the spread of online hate speech, especially racist hate speech, which has become an increasingly prominent problem from Burma to India, from the United States to Africa, and throughout Europe. In recent years, racist hate speech has exploded in volume on major social platforms and appears to be beyond the capacity of states, companies, or civil society to mitigate its spread and impact (Jakubowicz, 2017). This paper focuses on racist online hate speech on social platforms: why online racism is becoming more prevalent as the Internet develops, and how governments, social organizations, and the social media companies themselves can respond.
What is Hate Speech?
Hate speech is defined as “the expression, dissemination, and promotion of hatred against groups of individuals distinguished by a particular characteristic or a range of characteristics, such as race, ethnicity, gender, religion, nationality, and sexual preferences” (Flew, 2021).
While the Internet and digital communications can provide a wealth of information, offer relaxation and entertainment, and expand the opportunities for communication and engagement, they also expose users to the possibility of being attacked and harmed, amplifying harassment, hate speech, racism, and bigotry. As U.N. Secretary-General António Guterres has stated: “Social media provides a global megaphone for hate” (United Nations, 2021).
The harm caused by racist hate speech
The continued proliferation of racism-related hate speech has been of particular concern in recent years. Racism seems to be amplified in the social media environment – here are some examples of racist hate speech online:
1. Attacks on Asians in the Covid era
– Racist hate speech increased by 28% in the U.K. and the U.S.
– From 2019 to mid-2021, a new post containing racist hate speech appeared on average every 1.7 seconds.
– Anti-Asian hate speech grew by 1,662 percent in 2020 compared to 2019 – a measure of just how fast such speech is expanding.
According to a report by Ditch the Label (2021), anti-Asian sentiment and hate speech peaked globally during the Covid-19 pandemic. Former U.S. President Donald Trump repeatedly referred to the virus as the “Chinese virus” and “Kung Flu”, and his tweets were followed by a “dramatic rise” in anti-Asian hashtags on Twitter (Hart, 2021).
Examples of relevant racist hate speech on the Twitter platform are as follows:
“Asian dogs who brought the Covid virus.”
“Go back to China, you immigrants.”
“Chinks, stay away from me.”

2. Christchurch mosque shootings in New Zealand
The harms of online hate speech are not limited to offensive comments made online; in severe cases, they can lead to actual offline damage, harassment, and bullying.
On March 15, 2019, the Al Noor Mosque and Linwood Islamic Centre in Christchurch, New Zealand, were attacked in two consecutive mass shootings that killed 51 people and injured 49 others. The shooter, Brenton Harrison Tarrant, grew up in Grafton, New South Wales, Australia, where he expressed racist views and became deeply involved in a global alternative right-wing culture (Mann et al., 2019).

Figure: New Zealand mosque gunman pleads guilty to murder, terrorism (PBS News, 2020).
Behind these two horrific terrorist attacks lay not only right-wing ideology and racism but also a severe problem of online safety: social media platforms greatly facilitated the event. The perpetrator filmed himself and, sickeningly, streamed on Facebook Live before and during the shootings, continuing even after ten people had already been killed or injured. Moreover, before the attack, an 87-page manifesto full of anti-immigrant and anti-Muslim ideas was posted on 8chan – a forum known for racist and extremist content – with links that led users directly to the live stream of the shooting.
Social media undoubtedly became a weapon of terrorism in this case: about 4,000 people watched the live stream on Facebook, and roughly 1.5 million copies of the video were re-shared or uploaded in the following 24 hours (Hoverd et al., 2020).
Why is hate speech becoming more rampant in the age of the Internet?
The history of racist hate speech can be traced back to before World War II; it is not a new phenomenon. Still, Internet platforms have introduced new features that are more challenging to control – online hate speech is more sophisticated and global than most pre-digital speech. Below are three reasons why hate speech has become more aggressive in the digital age.
First, it is now cheaper to post such speech online than in traditional media. The anonymity and greater accessibility of the Internet provide a platform for expressing racist attitudes while keeping one’s identity protected (Castaño-Pulgarín et al., 2021). From this perspective, social media does encourage egregious behavior.
It is common for people who are meek in the real world to become fierce in attacking others once they go online. Sociologically, social media is a fragile institution: unlike a hunter-gatherer society, it does not depend on cooperation to survive. Its users interact at a distance, with relative anonymity and little reputational cost or punishment for bad behavior (Vince, 2018). Because of this anonymity, the family, friends, and acquaintances around you may never know what wildly radical role you play, what evil things you say, or how mean and nasty you behave online – and you may never be judged or blamed for it in real life.
Second, social platforms and the Internet can unite isolated individuals who share the same interests and beliefs, providing them with a space to interact. For those with fringe beliefs, social platforms can normalize their worldview and make them ever more determined in their thoughts and goals (Jenni & Tara, 2019).
Third, the Internet can stimulate emotions, especially anger. The likes and positive comments people receive for their posts act as positive feedback, giving them a sense of approval that boosts their mood. Recent studies have shown that messages containing moral and emotional words are more likely to spread on social media: each moral or emotional word in a tweet increases its retweet rate by about 20% (Vince, 2018).
This may also be why isolated individuals can become aggressive, habitually vent their anger online, and, in extreme cases, commit shockingly violent and bloody acts in the real world.
How to counter hate speech online?
1. Governments and organizations
Legislation by states and governments is a powerful weapon for regulating hate speech, and legal recourse is crucial to countering it (UNESCO, 2021). However, the International Convention on the Elimination of All Forms of Racial Discrimination does not directly address online issues, the positions of countries and governments towards hate speech vary, and the implementation of regional agreements and national laws on hate speech regulation likewise varies from region to region. In the E.U.’s 2008 Council Framework Decision, for example, most of the principles are “recommendations” and do not address the issue of online activity.
Germany’s approach to regulation, by contrast, is particularly instructive for other countries. The Netzwerkdurchsetzungsgesetz (NetzDG), passed at the end of June 2017 and in force since early October of that year, requires social media companies and websites to remove explicitly hateful comments within 24 hours, with offending companies and websites facing fines of up to €50 million (£44.3 million). Although the law is not flawless, it strongly signals Germany’s tough stance on tackling online hate speech and tells those who break the law that the Internet is not a lawless place. So far, more and more countries have set up non-profit regulatory organizations against online hate speech and passed legislation requiring internet operators and service providers to regulate online statements; governing hate speech must be supported by legal means.
2. Responsibilities of social media
Social media platforms should take greater responsibility for the spread of hate speech on their services.
In the case of the Christchurch attack, Facebook’s New Zealand spokeswoman Mia Garlick said the company quickly removed the perpetrator’s account and videos after being alerted by New Zealand police. Yet the videos could still be found on social media platforms, including Twitter, several hours after the attack (Jenni & Tara, 2019) – which raises the question of how thoroughly Facebook monitors such content and whether it has the power to prevent the spread of harmful material.
The E.U.’s approach covers the “dissemination, promotion, and removal of content ... We want platforms to be transparent about how their algorithms work” (Tan, 2022). When dealing with hate speech, it is not enough for internet companies to block user accounts and remove content; they should also increase transparency to strengthen their accountability.
3. The use of Artificial Intelligence
Manual detection and handling of hate speech, while more accurate, is a time-consuming, labor-intensive task that is neither efficient nor affordable at scale.
Social platform companies currently rely primarily on artificial intelligence to detect and review hate speech: machine-learning systems filter and censor users’ posts through keyword filtering, analyze large amounts of natural-language data, and perform sentiment analysis on specific texts and topics, so that hate speech can be dealt with before users see it, more effectively preventing its spread (UNESCO, 2021).
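To make this pipeline concrete, the sketch below shows, in Python with scikit-learn, how keyword filtering might be combined with a simple learned classifier. It is a minimal illustration under stated assumptions, not any real platform’s system: the blocklist, the training examples, and the threshold are all hypothetical placeholders.

```python
# Minimal illustrative sketch of automated hate-speech filtering:
# a keyword blocklist pass followed by a simple learned text classifier.
# All terms, examples, and thresholds below are hypothetical placeholders;
# real platform systems are far larger and more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: exact keyword filtering against a (placeholder) blocklist.
BLOCKLIST = {"slur_a", "slur_b"}  # stand-ins for real slurs

def keyword_flag(post: str) -> bool:
    """Return True if any blocklisted term appears in the post."""
    return any(w.strip(".,!?") in BLOCKLIST for w in post.lower().split())

# Stage 2: a toy statistical classifier. Real systems are trained on
# millions of moderator-labelled posts, not four toy sentences.
train_posts = [
    "go back to your country",       # labelled hateful (1)
    "you people are all criminals",  # labelled hateful (1)
    "had a great trip to the city",  # labelled benign  (0)
    "love this new recipe",          # labelled benign  (0)
]
train_labels = [1, 1, 0, 0]
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

def moderate(post: str) -> str:
    """Remove on a keyword match; otherwise score with the model."""
    if keyword_flag(post):
        return "removed (keyword match)"
    p_hate = model.predict_proba([post])[0][1]  # probability of the hateful class
    return "held for human review" if p_hate > 0.5 else "allowed"

print(moderate("slur_a, stay away from me"))  # removed (keyword match)
print(moderate("love this new recipe"))       # allowed
```

In practice, platforms combine many such signals – text, images, user reports, account history – and route uncertain cases to human moderators rather than acting on a single score.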

But A.I. also has limitations, and social media cannot rely entirely on it to regulate hate speech. First, “hate speech” is itself a contested concept. Although the European Union, the United Nations, and many scholars have continued to refine and flesh out its definition, it remains a very controversial term – as Howard Winant (2005) stated, “global racial conditions are inherently fluid, contradictory, and controversial.” How to define “minorities” in different countries and specific contexts, and where the line lies between an offense against a particular religion and an offense against its adherents, are vague and tangled questions. Second, artificial intelligence still faces many technical dilemmas. For instance, Facebook failed to remove videos of the Christchurch shooting in time because “platforms like Twitter and Facebook rely on automated software to remove this material, and if horror videos look like video games, it is hard for automated classifiers to distinguish the difference between them” (Jenni & Tara, 2019). Further, A.I. makes mistakes: it can fail to capture hate speech, or remove non-hateful content and thereby undermine users’ right to free expression.
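The toy filter sketched above makes these failure modes easy to see. In the hypothetical example below, a one-character substitution slips past the exact-match blocklist (hate speech goes undetected), while a post that quotes a slur in order to condemn it is removed (legitimate speech is suppressed):

```python
# Two failure modes of naive keyword filtering, reusing the toy
# keyword_flag() from the earlier sketch (hypothetical example).
BLOCKLIST = {"slur_a"}

def keyword_flag(post: str) -> bool:
    return any(w.strip(".,!?") in BLOCKLIST for w in post.lower().split())

# Missed detection: a one-character substitution evades exact matching.
print(keyword_flag("you are a s1ur_a"))                # False – hate slips through

# False positive: quoting the term to condemn it still triggers removal.
print(keyword_flag("calling people slur_a is wrong"))  # True – benign post removed
```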
Conclusion
Hate speech is an unavoidable problem of the digital age. The Internet provides a vast platform for the dissemination of racism and hate speech and is a powerful stimulant of negative emotions; online racism can cause mental health problems in milder cases and serious tragedies like the Christchurch shootings in severe ones. The persistence of racist hate speech reminds us that there is still a long way to go in regulating and responding to it: national governments, social media platforms, and individuals should take responsibility and remain vigilant, combating the wave of hate speech in the digital age through stronger legislation, increased transparency, and the careful use of artificial intelligence.
References
Alkiviadou, N. (2019). Hate speech on social media networks: towards a regulatory framework? Information & Communications Technology Law, 28(1), 19-35.
Bliuc, A. M., Faulkner, N., Jakubowicz, A., & McGarty, C. (2018). Online networks of racial hate: A systematic review of 10 years of research on cyber-racism. Computers in Human Behavior, 87, 75–86.
Bundesministerium der Justiz. (2017). Netzwerkdurchsetzungsgesetz (NetzDG) law.
https://www.bmj.de/DE/Themen/FokusThemen/NetzDG/NetzDG_node.html
Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608.
Ditch the Label. (2021). Uncovered: Online Hate Speech in the Covid Era. https://www.brandwatch.com/reports/online-hate-speech/view/
European Union. (2008, November 28). Council Framework Decision 2008/913/JHA. Combating certain forms and expressions of racism and xenophobia by means of criminal law.
Flew, T. (2021). Regulating platforms. John Wiley & Sons.
Hart, R. (2021). Trump’s ‘Chinese Virus’ Tweet Helped Fuel Anti-Asian Hate on Twitter, Study Finds. Forbes. https://www.forbes.com/sites/roberthart/2021/03/19/trumps-chinese-virus-tweet-helped-fuel-anti-asian-hate-on-twitter-study-finds/?sh=817795a1a7c2
Hoverd, W. J., Salter, L., & Veale, K. (2020). The Christchurch Call: insecurity, democracy and digital media-can it really counter online hate and extremism? SN Social Sciences, 1(1), 2.
Winant, H. (2005). The new politics of race: Globalism, difference, justice (p. xv).
Jakubowicz, A. (2017). Alt_Right White Lite: trolling, hate speech and cyber racism on social media. Cosmopolitan Civil Societies: An Interdisciplinary Journal, 9(3), 41-60.
Jenni, M., & Tara, M. (2019). How the Christchurch terrorist attack was made for social media. CNN Business.
https://www.cnn.com/2019/03/15/tech/christchurch-internet-radicalization-intl/index.html.
Mann, A., Nguyen, K., & Gregory, K. (2019, March 23). Christchurch shooting accused Brenton Tarrant supports Australian far-right figure Blair Cottrell. Australian Broadcasting Corporation.
Pacheco, E., & Melhuish, N. (2019). Measuring trends in online hate speech victimization and exposure, and attitudes in New Zealand. Available at SSRN 3501977.
Saleem, H. M., Dillon, K. P., Benesch, S., & Ruths, D. (2017). A web of hate: Tackling hateful speech in online social spaces. arXiv preprint arXiv:1709.10159.
Tan, R. (2022). Social media platforms duty of care-regulating online hate speech. Australasian Parliamentary Review, 37(2), 143-161.
United Nations. (2021). Understanding hate speech.
United Nations Educational, Scientific and Cultural Organization. (2021). Addressing hate speech on social media: contemporary challenges.
https://unesdoc.unesco.org/ark:/48223/pf0000379177
Vince, G. (2018, April 29). Using evolution to explain why we are different when we surf online. BBC News Chinese. https://www.bbc.com/ukchina/simp/vert-fut-43939866