Are social media platforms being exploited as weapons? Hate speech and online harm on Twitter during the Covid-19 era

With the advancement of technology, the number of users of social networking platforms such as Facebook, Twitter, Instagram, and Google has increased dramatically. Digital communication technology has made a significant contribution to bringing people around the world closer together, and it is also an inexpensive and effective means of communication. Social networking sites are public places where the masses share their latest news, opinions, and other details, creating a community without borders.

The diversity and popularity of social media platforms have given people multiple channels to receive and share information. Twitter, a popular social media platform, has become the platform of choice for many professionals, such as journalists and politicians, to express their views and receive information because of the way it encourages expression, participation, and debate (Daniel, 2021). However, because platforms such as Twitter are built around users’ freedom of expression, many of them have also become venues for violence and discrimination. In recent years, Twitter’s zealous commitment to freedom of expression has turned the platform into a breeding ground for abusive, harassing, and hateful speech (Daniel, 2021). This has raised concerns that social media platforms could become weapons to attack specific groups of people. The hate speech against Asian communities, particularly the Chinese, during the Covid-19 pandemic confirmed that these concerns were valid and served as a reminder to re-examine the regulation of social media platforms and their use.

What is hate speech on social media platforms?

Figure 1. (Miller, 2021)
Source: Jenesse Miller

Hate did not come into existence with the advent of the Internet and social networks, but the Internet and the subsequent emergence of social networks have added new complexity to the topic of hate speech (Natalie, 2019). According to Therese and Simon (2019), social media is a new platform spawned by the internet where information, opinions, and ideas can be easily shared. It provides users with accessible means of self-expression, giving them the opportunity to reach out and participate in society and politics. However, it has also facilitated the spread of hatred, giving messages of hate a platform on which they can spread quickly and widely (Therese & Simon, 2019). So what kind of message content counts as hate speech?

Hate speech is defined as ‘the expression, promotion, provocation or incitement of hatred against groups distinguished by specific characteristics such as race, ethnicity, gender, religion, nationality, and sexual orientation’ (Parekh, 2012, as cited in Teery, 2021, p. 91).

Hate speech is implicitly or explicitly directed at a particular group of people and stigmatises that group as having qualities widely perceived by the general public as disruptive to social order, or as bad behaviour (Parekh, 2012, as cited in Teery, 2021, p. 91). Hate speech may not be overtly abusive or violent, but the content it ultimately encodes for its audience is violent or emotive. Hate speech on social media platforms also takes many forms: much of it is conveyed through ‘scientific, jokey, or satirical’ language, which encourages audiences to hate a particular target or group (Teery, 2021).

Hate speech against Asian groups on Twitter during the COVID-19 pandemic

During the COVID-19 pandemic, the number of people using social media platforms increased due to frequent regional lockdowns, and there was a greater reliance on the Internet for information. Twitter is renowned as a real-time public network where news often appears ahead of the official news media (Juan Carlos et al., 2019). Thanks to its short message limit and unfiltered premise, Twitter’s use grew rapidly, with on average 500 million tweets posted daily on the platform (Juan Carlos et al., 2019). During the COVID-19 pandemic, Twitter reached a new high of 166 million daily users, up from 152 million at the end of 2019 and up 24% from a year earlier (The Washington Post, 2020). However, during the pandemic Twitter was also used by some with ulterior motives as a weapon to attack a particular group of people. Some of the posts and comments shared on Twitter reflected and even amplified hate speech that stigmatised and discriminated against the Asian community, especially against China and the Chinese.


In 2020, former US President Donald Trump posted inflammatory tweets from an officially verified Twitter account associating COVID-19 with the ‘Chinese virus’ (Reja, 2021). Due to the celebrity effect, Donald Trump’s tweets carry a lot of clout, and there was an increase in hate crimes against Asians following the release of the tweets. The hashtag #Chinesevirus also appeared on Twitter. Users typically use hashtags to show agreement and solidarity and do not usually add hashtags to representations they find unpleasant. Therefore, Twitter’s tweets and hashtags can detect changes in attitudes that lead to the formation of popular opinion, including hatred of specific groups (Yulin et al., 2021).

Figure 2. Donald Trump’s tweets about COVID-19
Sources: Twitter

In an analysis of users of the #chinesevirus hashtag, half were found to pair it with other anti-Asian hashtags, compared with only 20% of users of #COVID-19, the neutral name for the disease (Hart, 2021). Moreover, according to Kurtzman (2021), there is a clear discrepancy in anti-Asian sentiment between tweets carrying #COVID-19 and those carrying #Chinesevirus. Of the nearly 500,000 tweets with #COVID-19, about a fifth showed hateful sentiment, but anti-Asian hate speech was evident in half of the more than 775,000 tweets with #Chinesevirus (Kurtzman, 2021).
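The comparison above boils down to a simple tally: given tweets labelled for anti-Asian sentiment, compute the share of hateful tweets under each hashtag. The sketch below uses tiny hypothetical data and labels, purely to illustrate the kind of per-hashtag comparison the cited studies describe, not their actual method or dataset.

```python
from collections import Counter

# Hypothetical labelled tweets: (hashtag, flagged_as_anti_asian) pairs.
tweets = [
    ("#covid19", False), ("#covid19", True), ("#covid19", False),
    ("#chinesevirus", True), ("#chinesevirus", True), ("#chinesevirus", False),
]

# Count total tweets per hashtag, and how many were flagged as hateful.
totals = Counter(tag for tag, _ in tweets)
hateful = Counter(tag for tag, flagged in tweets if flagged)

for tag in totals:
    share = hateful[tag] / totals[tag]
    print(f"{tag}: {share:.0%} of tweets flagged as anti-Asian")
# → #covid19: 33% of tweets flagged as anti-Asian
# → #chinesevirus: 67% of tweets flagged as anti-Asian
```

With real data, the labels would come from a sentiment classifier or human annotation rather than being hand-assigned as here.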

Using the name of a specific group to describe a disease can stigmatise that group and perpetuate its stigmatisation, and Trump’s tweet incited an increase in anti-Asian terminology and sentiment on Twitter after he posted it (Hart, 2021). In the wake of the tweets, there was a surge in racism and violence against Asians across the US, which many blame on the hate speech used by politicians (Hart, 2021).

The harm caused by hate speech 

The trend of ‘Chinese virus’ hate speech on Twitter fuelled an anti-Asian atmosphere, and a culture of racism against Asians spread like a virus through the platform. Asians and people of Asian descent around the world have been subjected to attacks, violent bullying, threats, racist abuse, and discrimination related to COVID-19 due to stigmatisation on media platforms (Human Rights Watch, 2020). In the US, 1,497 reports of anti-Asian hate crimes were received, targeting not only Chinese but also Filipino, Thai, and Korean Americans. Trump’s ‘Chinese virus’ rhetoric instigated a rise in anti-Asian content on Twitter and could perpetuate racist attitudes on the platform (Jae Yeon & Aniket, 2021).

Why does hate speech exist on Twitter?

Social Media Platform Features

Figure3. (Mukundan, 2020)

Many hate crimes against racial, ethnic, and marginalised communities are directly linked to hate speech spread on Twitter and other social media platforms. Twitter’s social media nature enables an unprecedented speed of information dissemination, made possible by the fact that the platform puts the power of content generation and freedom of expression in the hands of every user (Sarah et al., 2020). Social media platforms like Twitter, which mediate most current online socialisation and creativity, are tools used for both pro-social and anti-social purposes (Ariadna, 2017). For example, the #BLM hashtag campaign was organised to raise the voice of the black community. However, as mentioned earlier, Twitter is also a conduit for hate speech and harassment, including racial and gender discrimination: miscreants use Twitter hashtags and tweets to insert abusive comments about transgender people and use jokes to cover up discrimination and prejudice (Ariadna, 2017).

Definitions of hate speech on different social media platforms

Different social media sites judge hate speech differently, which also affects the probability of hate speech appearing on each platform.

– YouTube states, in relation to hate speech, that the platform encourages freedom of expression and defends the right to it. However, hate speech that is offensive and demeaning to a group of people is not allowed (Natalie, 2019).

– Facebook refers directly to the deletion of hate speech that is discriminatory or offensive. In addition, the platform will prevent the emergence of organisations and people committed to inciting hatred against a particular group (Natalie, 2019).

– Twitter does not particularly mention initiatives to ban hate speech, but instead warns about potentially uncomfortable, harmful, and false content. Most importantly, the terms of Twitter state that “the platform may not monitor or control the content posted through the service and is not responsible for such content” (Natalie, 2019).

Thus, we can see that the platforms’ definitions of hate speech, their regulation, and their accountability influence the prevalence of hate speech to a certain extent and give unscrupulous people the opportunity to post it.

Sensitive media filter in Twitter 

The sensitive media filter is a Twitter warning label, mainly for potentially sensitive content such as violence or pornography. While Twitter’s sensitive media filter does prohibit excessively gory and sexually explicit media and some illegal content, content can still appear as long as it does not touch on a specific sensitive category (Hoffman, 2022). According to Ariadna (2017), the sensitive media filter can be used to disguise hate speech, because users can apply it themselves to the content they post. The filter warns users before they view content that may be sensitive, so posters can hide hate speech behind it or avoid being flagged (Ariadna, 2017). For example, some people post pictures of apes on Twitter with humorous tweets to cover up discrimination and abuse against certain races.

How social media platforms can better govern hate speech – based on Twitter

According to Flew (2021), it is difficult for media platforms to balance, on the one hand, the need to promote freedom of expression and minimise censorship and thus retain audiences, and on the other, effective legal sanctions against hate speech and online abuse. Nevertheless, it is imperative that the harm caused by hate speech on social media platforms be eliminated.

  • For platforms, the tools to regulate hate speech should be strengthened, rather than leaving regulation entirely to users themselves. Twitter allows users to object to hateful or uncomfortable speech by providing a “dislike” button on the platform. However, the “dislike” button may also lead to algorithmically generated messages that run counter to the interests of the platform’s advertisers and may exacerbate racist behaviour (Ariadna, 2017). Algorithms can be key players in the dynamics of platform regulation: a database of tweets labelled as hate speech, or as speech that makes most users uncomfortable, can be built into a metadata structure. This architecture allows the algorithm to review content on the platform more thoroughly and not be confused by humorous hate speech (Femi Emmanuel et al., 2020).
  • For users, it is essential that they are more aware of hate speech and that they remain unaffected by it. It is even more imperative for users to maintain a sense of ethics when using social media platforms. Faced with the anonymity and the right to freedom of expression that internet platforms offer users, it is all the more necessary for them to use this right soberly.
  • For the government, it is crucial to adopt a more effective approach to the governance of hate speech. The European Union has a low tolerance for hate speech and takes a strong stand against it. The primary measure taken by the EU is platform-based notice-and-take-down liability: in the face of hate speech, the government requires removal within 24 hours, and this measure effectively prevents content that incites violence and hatred from being distributed on media platforms (Femi Emmanuel et al., 2020).
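The notice-and-take-down approach described above can be sketched as a simple review queue: content that matches a blocklist is flagged and stamped with a 24-hour removal deadline. The blocklist, function name, and ticket format below are all hypothetical; real moderation pipelines combine machine-learning classifiers with human review rather than bare keyword matching.

```python
from datetime import datetime, timedelta

# Hypothetical blocklist of slur hashtags; real systems use ML classifiers
# plus human reviewers, not a fixed keyword list.
BLOCKLIST = {"chinesevirus", "kungflu"}

def flag_for_review(text, reported_at):
    """Return a review ticket with a 24-hour removal deadline
    (EU-style notice-and-take-down), or None if nothing matches."""
    tokens = {t.strip("#").lower() for t in text.split()}
    if tokens & BLOCKLIST:
        return {"text": text, "deadline": reported_at + timedelta(hours=24)}
    return None

ticket = flag_for_review("The #ChineseVirus is spreading", datetime(2020, 3, 17))
print(ticket["deadline"])  # → 2020-03-18 00:00:00
```

Keyword matching alone would miss the ‘jokey or satirical’ hate speech discussed earlier, which is exactly why the bullet above argues for richer, metadata-aware classification.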


References

Ayo, F. E., Folorunso, O., Ibharalu, F. T., & Osinuga, I. A. (2020). Machine learning techniques for hate speech classification of twitter data: State-of-the-art, future challenges and research directions. Computer Science Review, 38, 100311.

Alkiviadou, N. (2019). Hate speech on social media networks: Towards a regulatory framework?. Information & Communications Technology Law, 28(1), 19–35.

Hoffman, C. (2022, October 28). How to Unblock “Potentially Sensitive Content” on Twitter. How-To Geek.

Enarsson, T., & Lindgren, S. (2019). Free speech or hate speech? A legal analysis of the discourse about Roma on twitter. Information & Communications Technology Law, 28(1), 1–18.

Flew, T. (2021). Regulating platforms. Polity Press.

Human Rights Watch. (2020, May 12). Covid-19 Fueling Anti-Asian Racism and Xenophobia Worldwide. Human Rights Watch.

Hswen, Y., Xu, X., Hing, A., Hawkins, J. B., Brownstein, J. S., & Gee, G. C. (2021). Association of “#covid19” Versus “#chinesevirus” With Anti-Asian Sentiments on Twitter: March 9-23, 2020. American Journal of Public Health (1971), 111(5), 956–964.

Konikoff, D. (2021). Gatekeepers of toxicity: Reconceptualizing Twitter’s abuse and hate speech policies. Policy and Internet, 13(4), 502–521.

Kim, J. Y., & Kesari, A. (2021). Misinformation and Hate Speech: The Case of Anti-Asian Hate Speech During the COVID-19 Pandemic. Journal of Online Trust and Safety, 1(1).

Kurtzman, L. (2021, March 18). Trump’s ‘Chinese Virus’ Tweet Linked to Rise of Anti-Asian Hashtags on Twitter. UCSF.

Pereira-Kohatsu, J. C., Quijano-Sánchez, L., Liberatore, F., & Camacho-Collados, M. (2019). Detecting and Monitoring Hate Speech in Twitter. Sensors (Basel, Switzerland), 19(21), 4654–.

Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946.

Reja, M. (2021, March 19). Trump’s ‘Chinese Virus’ tweet helped lead to rise in racist anti-Asian Twitter content: Study. ABC News.

Masud, S., Dutta, S., Makkar, S., Jain, C., Goyal, V., Das, A., & Chakraborty, T. (2020). Hate is the New Infodemic: A Topic-aware Modeling of Hate Speech Diffusion on Twitter.

Hart, R. (2021, March 19). Trump’s ‘Chinese Virus’ Tweet Helped Fuel Anti-Asian Hate On Twitter, Study Finds. Forbes.

The Washington Post. (2020, April 30). Twitter sees record number of users during pandemic, but advertising sales slow. The Washington Post.
