Online Hate Speech during COVID-19

Jiaqi Song



With the continuous development of modern society, devices such as mobile phones and computers have become increasingly widespread. Digital communication has gradually replaced older forms such as telegrams and letters (Rogers, 2019). Online, people can find inspiration for whatever interests them, but they can also be harmed by attacks and hate speech. Social media thus brings positive value to individuals and society while also having a negative side. Although the Internet makes communication easier, we often cannot fully know who we are communicating with; in the online world, each of us can separate our online self from reality, and this anonymity, combined with the visibility of the Internet, makes misunderstanding and conflict over differing opinions more likely. As a result, we can see all kinds of hate speech in our daily use of social media, and young people who play online games frequently encounter such disturbing language. This paper proceeds in four parts: what hate speech is, the harm of online hate speech, a case analysis, and how to govern hate speech.

The concept and forms of hate speech

First, understanding hate speech requires clear boundaries. The concept refers to aggressive speech that targets a group or individual on the basis of specific characteristics such as race, religion, gender, or sexual orientation and that incites phenomena such as violence, sometimes in subtle forms or as jokes (Fortuna and Nunes, 2018). It is an extreme expression of uncivil behavior online (Coe et al., 2014) and risks threatening social peace and stability. Hate speech existed before digital social media platforms emerged, but the era of Internet big data provides more convenient tools and channels for online hatred. Today any speech or content can be shared over the Internet with a tap on the screen, without consideration of the consequences (O'Regan & Theil, 2020). When we browse platforms such as Facebook, Twitter, and Weibo, we often see prejudice and discrimination arising from various causes. These statements can be conveyed in many forms, including images, nicknames, symbols, and emojis, and can be transmitted in real time or asynchronously to people around the world. Online hate speech differs sharply from hate in traditional media: it can be spread and shared easily, anonymously, and at low cost.

(Figure: data gathered using Brandwatch Consumer Research across social media sites, forums, and blogs, 2019 to mid-2021)

Hate speech on the Internet takes various forms, and users from different backgrounds and cultures perceive, judge, and understand it differently (Bormann et al., 2021). The most common form is defamation and slurs or metaphors, such as speech that incites discrimination, belittles its targets, or expresses racial hatred, notably attacks on Asian populations during COVID-19. The second form is online threats of violence, including incitement to self-harm. Compared with these two, hateful images and symbols are the least common form, though their prevalence also varies by region.

The report Facebook: Regulating Hate Speech in the Asia Pacific notes that in 2020 there were about four billion Internet users worldwide, and online social media had become an important platform for people to communicate, connect, and exercise free speech. Most social media platforms, such as Twitter, Facebook, and Weibo, allow users to join for free and to publish and share their ideas. Hate speech emerges endlessly from this flow: a discriminatory, prejudiced, or hateful post may attract little attention at first, but as more and more users forward it, its reach grows and the harm of those comments escalates further (Coffey and Woolworth, 2004). This phenomenon is becoming an increasingly serious online safety issue, and neither legal controls nor platform controls have been sufficient to reduce such hatred.

Harm of hate speech

Social media has become an indispensable part of everyone's daily life. The content individuals post is retained permanently, but content that spreads hate speech in posts and comments also causes people considerable psychological distress. In my opinion, the first cause of this phenomenon is that platforms do not play a positive role in management and guidance and do not restrain the spread of such negative emotions in time. Second, people today are under great pressure; work pressure, social competition, academic pressure, and so on lead some to vent freely on the Internet without self-restraint. Third, regulators need to strengthen the management of cyberspace, resolutely investigate and punish improper and illegal speech, shut down accounts when necessary, and strengthen positive messaging online.

Although everyone has the right to express their opinions, venting sadness, anger, and other harmful remarks and behavior on social networking sites through unconventional means may directly or indirectly affect others' physical and mental health, even if the effect is not obvious in the short term. A person attacked by online hate speech may suffer lowered self-esteem and become anxious, fearful, and isolated. If this situation persists for a long time, it can become a chronic poison that leads to mental disorders such as chronic depression. Hate speech can do real harm even when it is published only in cyberspace, and it can well become fatal when it moves offline.

Online hate speech against certain groups can undermine their members' standing as free and equal members of society, and vulnerable groups may no longer be willing to express their opinions freely. Stigmatizing the hated groups polarizes them and drives some people to withdraw from such groups, which seriously damages social cohesion.


Case analysis

COVID-19 emerged in Wuhan, China, in late 2019 and spread extremely rapidly, first across Asia. During this pandemic, which disrupted the normal lives of millions of people and cost many their lives, anxiety and panic fueled intense hatred and racism toward Asians in many countries and regions around the world. Chinese and other Asian people in particular suffered various forms of discrimination after the outbreak, including physical attacks, verbal abuse, and vandalism (Jeung et al., 2021). Some media even called COVID-19 the "Chinese flu." In the US, for example, a Chinese-American woman reported that while she was talking to her mother in Mandarin, a passer-by shouted at her to keep the virus away. Racist attacks also occurred in many other countries and regions during COVID-19, including Australia, India, and France. Such hate speech and behavior appear on online social media platforms as well: many people spread malicious comments and misinformation about Asian people, claiming that COVID-19 is a virus from China, and began to shun anyone Chinese (He et al., 2021), viewing them as spreaders of the virus and threats. Those who produce this hate speech mostly channel uncertainty and tension into discriminatory behavior and racist hatred.

Because of lockdown measures around the world during the pandemic, many schools, businesses, and factories closed and people had to stay at home. Students studied online, and employees of enterprises and factories likewise worked remotely. The Internet became the main channel through which people commented on and received information about the outside world. Platforms such as Facebook, Twitter, and YouTube adopted AI tools to review their content, strengthening the automated identification of hate speech and COVID-19 misinformation. But hate speech grew so rapidly that AI algorithms could not keep up with deleting such comments. Moreover, identifying hate speech through AI is not an entirely effective method, because automatic detection is insufficient to capture human emotion and judgment. If some words or letters in hate speech are replaced or deliberately misspelled, the system will likely fail to recognize them (Mary, 2019).
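The evasion problem described above can be illustrated with a toy sketch: a naive keyword filter misses obfuscated spellings unless it first normalizes common character substitutions. The word list and substitution map below are hypothetical and purely illustrative; real moderation systems are far more sophisticated.

```python
# Toy illustration (hypothetical blocklist): why keyword filters miss
# deliberately obfuscated terms, and how simple normalization helps.

# Common "leetspeak" substitutions used to dodge filters (illustrative).
SUBSTITUTIONS = {"4": "a", "3": "e", "1": "i", "0": "o", "5": "s", "@": "a", "$": "s"}

BLOCKED_TERMS = {"hate"}  # hypothetical blocklist


def normalize(text: str) -> str:
    """Lowercase the text and undo common character substitutions."""
    text = text.lower()
    for sub, letter in SUBSTITUTIONS.items():
        text = text.replace(sub, letter)
    return text


def naive_filter(text: str) -> bool:
    """Flag text only if a blocked term appears verbatim."""
    return any(term in text.lower() for term in BLOCKED_TERMS)


def normalized_filter(text: str) -> bool:
    """Flag text after normalizing obfuscated spellings."""
    return any(term in normalize(text) for term in BLOCKED_TERMS)


post = "so much h4te for them"
print(naive_filter(post))       # the obfuscated spelling slips through: False
print(normalized_filter(post))  # normalization catches it: True
```

Even with normalization, substitution maps cannot capture context, sarcasm, or novel spellings, which is why platforms combine such rules with machine-learned classifiers and human review.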

In the face of online hate speech and bullying, effective governance becomes very important. The next part draws on my personal practical experience with the governance problems of hate speech and the means by which a platform can manage it. I worked as a content administrator for an online game, and the main thing I found during my time there was that hate speech was abundant, with a steady stream of negative comments every day. Beyond the system's automatic screening, much of the work required manual review by administrators, and some hateful topics were forwarded and commented on by many users before the comments could be deleted, allowing them to spread widely. Such negative posts need not only timely deletion but also proactive countermeasures, such as posting frequent announcements in the most visible place on the topic page and repeatedly emphasizing that any content related to hate speech is prohibited.

Governance of online hate speech

In governing online hate speech, platforms should not only ensure diversity and freedom of speech but also guide users toward reliable channels of information, intensify efforts to identify and prevent the spread of false information, and reduce the conflicts it causes; understanding how users themselves perceive hate speech is also very important (Rafael, 2021). Most social platforms, such as Facebook, Instagram, and Twitter, and video sites such as YouTube and Twitch, comply with the Children's Online Privacy Protection Act (COPPA) in order to protect children. Teenagers' minds are not yet mature, and they can be impulsive; heavy or long-term use can affect their mental health and well-being, and because they pay too little attention to their privacy, they may suffer bullying, harassment, or even blackmail.

Take Facebook as an example. As a social media platform with a huge user base, it has taken many measures to control hate speech. First, it promotes its policies to help users understand what kinds of speech are banned and regularly publishes rule announcements. Second, it uses automated detection to identify potential hate speech: in May 2020 the company reported that it had strengthened its AI screening systems so they could better understand the meaning of language in context, and part of the new system can combine text with images to detect harmful topics. In addition to AI detection, Facebook employs thousands of trained and assessed reviewers to further ensure that violations are identified and handled as accurately as possible. Multiple social media outlets have also adopted penalties to keep hate speech under control, such as restricting a user's account, deleting hate speech content, and suspending or closing offending accounts.

In the end, imagine a piece of paper that is originally flat and clean but becomes wrinkled. If you try to smooth it out, you will find that it is very different from before. The metaphor suggests a conclusion: any words, especially in cyberspace, should be published only after careful thought, because even a joke you believe harmless may inadvertently hurt someone.


References

Sinpeng A., Martin F. R., Gelber K., Shields K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.

Brown A. (2017). What is so special about online (as compared to offline) hate speech? Ethnicities, 18(3), 297-326.

Laaksonen S., Haapoja J., Kinnunen T., Nelimarkka M. & Pöyhtäri R. (2020). The Datafication of Hate: Expectations and Challenges in Automated Hate Speech Monitoring. Frontiers in Big Data, 3.

Rieger D., Kümpel A. S. & Schmid U. K. (2022). How social media users perceive different forms of online hate speech: A qualitative multi-method study. New Media & Society.

Sakki I., Castrén L. (2022). Dehumanization through humour and conspiracies in online hate towards Chinese people during the COVID-19 pandemic. British Journal of Social Psychology, 61(4), 1418-1438.

Solovev K., Pröllochs N. (2023). Moralized language predicts hate speech on social media. PNAS Nexus, 2(1).

Toliyat A., Levitan S. I., Peng Z., Etemadpour R. (2022). Asian hate speech detection on Twitter during COVID-19. Frontiers in Artificial Intelligence, 5.

O'Regan C., Theil S. (2020). Hate speech regulation on social media: An intractable contemporary challenge. Research Outreach.
