What we are worried about
To date, there are approximately 4 billion Internet users worldwide. Online platforms have become important venues for communication, connection, and freedom of expression. At the same time, the incidence of online hate speech is rising rapidly across the globe, intertwined with misinformation and extremist political material. (Sinpeng et al., 2021) In recent years, the COVID-19 pandemic has fueled racial discrimination against Chinese and Asian communities in many Western countries, and online hate attacks have increased dramatically, triggering a public crisis and posing challenges for Internet operators.
The Roots of Hatred: How discrimination arises
The historical roots of Sinophobia can be traced back to the First Opium War (1839-1842) and to negative portrayals of China and the Chinese people as uncivilized, inferior second-class citizens and cheap labor. (Sakki & Castrén, 2022) Anti-Chinese rhetoric of the era described the Chinese as opium and gambling addicts. It also cast Chinese men as dangerous, accusing them of luring young girls with candy and gifts laced with opium. Since then, the Chinese have been derogatorily labeled the “Yellow Peril” and the “sick man of Asia”. (Sakki & Castrén, 2022)
A painting depicting the “Yellow Peril” (黃禍)
It is generally agreed that the coronavirus was first detected in Wuhan, China, in December 2019, and that the outbreak subsequently spread globally. In 2015, the World Health Organization (WHO) had issued guidelines prohibiting official names for new infectious diseases that reference geographic locations, individuals, cultures, or population groups (Sachs, 2020). Nevertheless, at a press conference in March 2020, then US President Donald Trump referred to the virus as the “Chinese virus” or “China virus.” He repeatedly defended these terms throughout March and early April, arguing that they were not racist but simply statements of fact. (Gover et al., 2020) Fueled by Trump’s comments, other political officials and conservative journalists also used derogatory language to blame China. For instance, Senator John Cornyn claimed that China was responsible for the spread of COVID-19 because of its “culture where people eat bats, snakes, dogs, and things like that.” (Gover et al., 2020)
The use of terms like “Chinese virus” by the media and political leaders may not by itself alter a person’s beliefs or attitudes, but it can evoke negative stereotypes, intensify prejudice, and even incite acts of hatred. (Bushman, n.d.) Around the world, Chinese citizens and people of Asian descent have been subjected to verbal and physical attacks. In Australia, an elderly Asian man collapsed and died after no one came to his aid for fear that he was infected. In the US, an Asian-American woman in Brooklyn was badly burned with chemicals while taking out the rubbish. (Sachs, 2020) Meanwhile, “Yellow Peril” rhetoric has resurged, with Asians once again labeled filthy, compared to animals, and told to “go back to where they came from”. (Sachs, 2020)
Uncovering the Truth: the relationship between Internet and hate speech
During the pandemic, much of the world was in physical isolation, and people became increasingly reliant on the Internet to get information, exchange opinions, and express attitudes. At the same time, however, the Internet became a breeding ground for the spread of anti-Asian sentiment. Online hate speech generally takes the following three forms.
Manifestations of hate speech on the Internet (Hackett, 2021)
Slurs/tropes: The most common form of online hate speech is the use of slurs and tropes. Perpetrators use racist, sexist, and homophobic terms to attack others. After the start of the pandemic, new slurs, particularly those targeting the Asian community, began to circulate.
Threats of violence: The second most common type of online hate speech. Threats of violence, or discussions of such threats, are most likely to occur on forums.
Images: Hate images and symbols drove the least online discussion of the three formats, but saw the biggest increase since the start of the pandemic (+28%). (Hackett, 2021)
According to a study of social media data covering more than 263 million conversations, there have been 5.5 million cases or discussions of online hate speech against the Asian community since 2019. In the first year of the pandemic, online hate speech against Asians increased by a staggering 2,770%. Liam Hackett, chief executive of the youth charity Ditch the Label, which commissioned the study, said: “Online hate speech has reached an unprecedented peak and for some communities has reached unbearable extremes.” (Hackett, 2021; CGTN, 2021)
Dangers of virality: how TikTok memes amplify dehumanization
Dehumanization involves devaluing groups by equating them with culturally despised subhuman entities such as pigs, rats, monkeys, or even germs and dirt. (Bahador, 2020) As an expression of emotion, dehumanizing discourse appears in ethnic jokes and hate speech. When jokes about a particular group are repeated and shared, they can turn stereotypes into perceived essential characteristics of that group, thereby shaping the way others view it. (Sakki & Castrén, 2022)
TikTok has become very popular, particularly among younger generations. Its “Use this voice” feature lets users remix audio from other videos, making it a unique platform for examining racist stereotypes. (Rodriguez, 2022) According to research from the early stages of the pandemic, TikTok videos tagged with #coronavirus, #China, #chinacoronavirus, #Wuhan, and other related keywords frequently featured racist humor, including “Yellow Peril” memes that depict individuals or items as “infected” with the coronavirus because of their association with China or other Asian nations. (Rodriguez, 2022) In these samples, three main memetic trends stood out:
- Short skits portraying Asian people as the cause of coronavirus transmission. For example, a man dances while on-screen text reads: “While you are eating at your favorite Chinese restaurant, you hear a cough coming from the kitchen”, and the camera zooms in on his worried face. (19,000 likes)
- People reacting with fear or disgust when they receive parcels or goods from China. For example, someone excitedly opens the front door to find that a parcel has arrived, but on seeing it is from China becomes anxious and puts on rubber gloves to disinfect it. (18,000 likes)
- Skits blaming Chinese wildlife consumption for the coronavirus outbreak. For example, a woman with an Australian accent films a bat flying in the sky. “The message to Australians is…” she says. “… Don’t fucking eat them.” (40,000 likes) (Matamoros-Fernandez et al., 2022)
In addition, researchers have identified a form of “digital yellowface”. In these videos, users employ TikTok’s “Use this voice” feature to imitate an Asian accent in English, utter gibberish meant to “sound like an Asian accent”, or say “Subaru” (the Japanese car brand) in an exaggerated manner. Some users also dramatize their facial expressions to sharpen the aggressive caricature they are portraying. (Rodriguez, 2022)
These videos adopt a light-hearted, witty, humorous stance to reinforce negative stereotypes. Through the dehumanizing metaphor of the “Yellow Peril”, they evoke “an image of Asians as savages, ruthless, immoral and inferior”, thereby creating a serious social impact. (Matamoros-Fernandez et al., 2022)
Stopping hate speech: what can platforms do
Content moderation serves as a “governance mechanism” used in a variety of contexts to foster community engagement, cooperation, and civility. Automated moderation using machine learning and computer algorithms is common practice on social media, typically involving minimal human intervention. (Baker et al., 2020) In response to comments about the coronavirus, the major social media platforms increased their moderation efforts. For example, Twitter expanded its definition of online harm to address “content that directly contradicts guidance from authoritative sources of global and local public health information”, while Facebook and Instagram announced that they would remove misinformation related to the 2019 coronavirus disease that could cause “imminent personal harm”. (Baker et al., 2020)

However, for several reasons these measures do not seem to be working well, and hate speech remains rampant on these platforms. First, popular websites receive a vast and growing volume of user-generated content, and software or algorithms alone cannot reliably classify it and determine what is acceptable, even before considering the scale of the problem. (Roberts, 2019) Second, moderation on social media platforms is often ideologically motivated and politically biased. Because content moderation practices are developed from the perspective of individual users, they necessarily arbitrate between competing, and often incompatible, political interests and values, at the individual, national, and international levels. (Baker et al., 2020) For example, Sweden and New Zealand took contrasting approaches to COVID-19 at the outset: Sweden relied on “herd immunity” while New Zealand pursued an elimination strategy, yet both invoked scientific evidence to justify very different government policies. (Baker et al., 2020) Likewise, some people see “Black Lives Matter” as demeaning to others, while others call “All Lives Matter” racist because it is insensitive to those whose lives are in danger. These are subjective judgments, and social media platforms themselves have no clear mandate to adjudicate whether such speech is justified.
While it is not realistic to completely eliminate hate speech online, there are still some remedies available to platforms:
- Educating users by tagging suspicious or potentially harmful content
- Reducing the visibility of hate content through algorithmic downgrading
- Restricting the ability to engage with content that may cause harm (Rodriguez, 2022)
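To make the remedies above concrete, here is a minimal sketch of the first two: tagging suspicious content with a warning label and algorithmically reducing its visibility. All names, the keyword list, and the scoring scheme are hypothetical illustrations; real platforms rely on trained machine-learning classifiers rather than keyword matching, and their actual pipelines are far more complex.

```python
# Hypothetical sketch of warning-label tagging and algorithmic downranking.
# A real system would use an ML classifier, not a keyword denylist.

# Stand-in denylist for a trained hate-speech classifier (hypothetical terms).
FLAGGED_TERMS = {"slur_a", "slur_b"}

def moderate(post: dict) -> dict:
    """Tag a post and cut its feed-ranking score if it matches flagged terms."""
    words = set(post["text"].lower().split())
    if words & FLAGGED_TERMS:
        # Remedy 1: educate users with a warning label.
        post["warning_label"] = "This post may contain harmful content."
        # Remedy 2: downrank so the post surfaces far less often in feeds.
        post["rank_score"] *= 0.1
    return post

posts = [
    {"text": "hello world", "rank_score": 1.0},
    {"text": "some slur_a here", "rank_score": 1.0},
]
moderated = [moderate(p) for p in posts]
```

The design point is that flagged content is not deleted outright; it stays available (preserving expression) but is labeled and made less visible, which is the trade-off the remedies above describe.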
Fighting back: #StopAsianHate movement
In the spring of 2020, several Asian studies scholars in the United States began to notice a worrying trend: they were increasingly hearing about anti-Asian incidents from friends, colleagues, and news reports. In response, they came together to launch the Stop AAPI Hate website (AAPI stands for Asian Americans and Pacific Islanders), where people can report hate incidents they have experienced. (Zhou, 2022) At first, the movement grew slowly. It was not until March 2021, when a mass shooter in suburban Atlanta targeted three Asian-owned businesses, killing eight people, including six Asian women, that outrage over anti-Asian prejudice in American society reached a boiling point. In the following weeks, rallies took place in over 50 cities, and hundreds of thousands of people joined trainings, petitions, and crowdfunding campaigns to support victims and condemn anti-Asian violence. Meanwhile, hashtags like #StopAsianHate and #StopAAPIHate took off on social media: as tracked by the analytics firm Zignal Labs, they were used more than 8.4 million and 2.5 million times respectively on Twitter in 2021. Many public figures also came forward to voice their support for the campaign. (Zhou, 2022)
Artwork by Amanda Phingbodhipakkiya (source: Time)
The movement’s influence also drove a number of policy wins. In May 2021, US President Biden signed a bill to combat hate crimes related to the COVID-19 pandemic, especially those against Asian Americans. The legislation makes hate crimes easier to report by expanding public outreach and providing online reporting resources in multiple languages. It also directs the Department of Justice to designate an official to expedite the review of pandemic-related hate crimes, and allows state and local governments to create crime-reduction programs to prevent and respond to hate crimes. (Sprunt, 2021)
A few takeaways
Overall, hatred of Asians is deeply rooted in some Western countries, the United States chief among them, owing to the legacy of colonialist ideology. Since the outbreak of the coronavirus, media outlets and political leaders have further fueled public hatred of Asians by promoting negative stereotypes and misinformation about the virus’s origins.
In this context, the role of social media is controversial. On the one hand, online platforms amplify the most emotional content, allowing extreme rhetoric to thrive, and through the Internet hate speech spreads far more efficiently than before. Because platforms exert little clear control over humorous and satirical content, memes and videos of this kind are especially likely to entrench racial stereotypes and contribute to social inequality. On the other hand, social media has also helped Asian people unite against discrimination and hate speech, building significant social influence and ultimately drawing government attention to hate issues.
In my opinion, the debate over the pros and cons of social media will continue. There is no doubt, however, that as online hate speech keeps increasing, major platforms need to improve the way they operate, protecting their users’ freedom of expression while creating a safer online environment for minorities. Furthermore, as hate crimes and racism are prevalent across the globe, states should establish a common international framework with clear and consistent guidelines to safeguard the rights of minorities.
Bahador, B. (2020, December 3). Classifying and Identifying the Intensity of Hate Speech. Items. https://items.ssrc.org/disinformation-democracy-and-conflict-prevention/classifying-and-identifying-the-intensity-of-hate-speech/
Baker, S. A., Wade, M., & Walsh, M. (2020). The challenges of responding to misinformation during a pandemic: content moderation and the limitations of the concept of harm. Media International Australia, 177(1), 103–107. https://doi.org/10.1177/1329878X20951301
Bushman, B. (n.d.). Calling the coronavirus the “Chinese virus” matters – research connects the label with racist bias. The Conversation. https://theconversation.com/calling-the-coronavirus-the-chinese-virus-matters-research-connects-the-label-with-racist-bias-176437
CGTN. (2021, November 16). Anti-Asian hate speech online surged 2,770% amid pandemic, report shows. https://newseu.cgtn.com/news/2021-11-16/Anti-Asian-hate-speech-surged-online-amid-pandemic-report-shows-15f77VauL2U/index.html
Gover, A. R., Harper, S. B., & Langton, L. (2020). Anti-Asian Hate Crime During the COVID-19 Pandemic: Exploring the Reproduction of Inequality. American Journal of Criminal Justice, 45(4), 647–667. https://doi.org/10.1007/s12103-020-09545-1
Hackett, L. (2021). Uncovered: Online Hate Speech in the Covid Era. Ditch the Label Youth Charity. https://www.brandwatch.com/reports/online-hate-speech/view/
Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.
Rodriguez, A. (2022, September 30). Not just a joke: we scoured TikTok for anti-Asian humour during the pandemic and found too many disappointing memes. The Conversation. https://theconversation.com/not-just-a-joke-we-scoured-tiktok-for-anti-asian-humour-during-the-pandemic-and-found-too-many-disappointing-memes-184166
Sachs, H. J. (2020, May 14). “Yellow peril” in the age of COVID-19. Humanity in Action. https://humanityinaction.org/knowledge_detail/article-usa-hannah-sachs-yellow-peril-in-the-age-of-covid-19/
Sakki, I., & Castrén, L. (2022). Dehumanization through humour and conspiracies in online hate towards Chinese people during the COVID‐19 pandemic. British Journal of Social Psychology, 61(4), 1418–1438. https://doi.org/10.1111/bjso.12543
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland.
Sprunt, B. (2021, May 20). Here’s What The New Hate Crimes Law Aims To Do As Attacks On Asian Americans Rise. NPR.
Zhou, L. (2022, March 15). What the Stop Asian Hate movement has achieved one year after the Atlanta shootings. Vox. https://www.vox.com/22820364/stop-asian-hate-movement-atlanta-shootings