‘Chinese Virus’: How a derogatory tweet changed the course of Asian people’s lives in 2020

SID: 480239602 - Natalie Moussa

With the turn of the new century and the introduction of social media platforms, the world has had to deal with problems that nobody would have imagined as recently as thirty years ago. Globalisation brought real advantages: instant messaging, knowing what your friends are up to on any given day, and current affairs from any place in the world. But it also comes with consequences; after all, not everything can be good. One of the main issues with the internet and social media platforms is hate speech and online harassment.

When we were kids and someone bullied us at school, we could go home and decompress, maybe play a favourite videogame or read the newest Percy Jackson book. Now the bullying and harassment are nonstop. A fifteen-year-old can be bullied at school during the day, go home, and find the bullying continuing on their social media accounts and instant messaging. They get no reprieve or peace of mind. What makes online harassment more dangerous is that the internet is often a place of refuge, somewhere the new generations go to escape the real world. When that safe haven is ‘corrupted’ and ‘broken into’, they believe they have nowhere else to go. They can keep blocking abusive users, but they cannot unsee or unread all the hateful comments they keep receiving. Online harassment is also anonymous by nature: where one knows who bullies them in person (say, a couple of kids at school), one does not know who is behind the screen, which makes the victim hypervigilant and unable to trust their environment.

I used a school scenario as an example, but adults are not safe from online harassment and speech harm either. Throughout this post I will talk about platformisation and how platforms help create these situations by enabling hate speech and harassment on their interfaces, focusing mainly on the role of Facebook and on recent events, including the hate speech directed at Asian people during the height of COVID-19, which was the catalyst for platforms to take serious action against online harassment.

What is Hate Speech?

‘Hate speech’ covers “many forms of expressions which advocate, incite, promote or justify hatred, violence and discrimination against a person or group of persons for a variety of reasons” (European Commission against Racism and Intolerance (ECRI), 2023). According to Gelber (2019; as cited in Sinpeng, Martin, Gelber & Shields, 2021), for speech to cause harm it needs to (1) take place in ‘public’, (2) be directed at a member of a systematically marginalised group in the social and political context in which the speech occurs, and (3) be an act of subordination that embeds structural inequality in the context in which the speech takes place, among other criteria (p. 11). Speech capable of that kind of harm needs to be regulated by the platforms on which it occurs, since they are the ones allowing it to happen on their own interfaces.

Let’s talk statistics

According to the Pew Research Center, forty-one percent of Americans have experienced some form of online harassment, with seventy-five percent of it occurring on social media platforms. Seventy-nine percent believe these platforms are not doing enough to moderate and address the bullying and harassment happening on them (Vogels, 2021). Would you say they are right? Has this been your experience?

How does this impact young people?

Over half of the teens surveyed felt angry after being cyberbullied, a third felt hurt, and fifteen percent felt scared. Two thirds of tween victims said it had a negative impact on how they felt about themselves, a third said it affected their friendships, and thirteen percent said it affected their physical health (Security.org, 2023). These effects will only deepen as exposure to online harassment and hate speech becomes more prevalent on these platforms: young people could grow desensitised to it, further damaging their mental health.

The case of COVID-19 and online speech harm

Unless you were living under a rock, 2020 is not a year that will be easily forgotten. For some people it meant staying home and trying out cute baking recipes, while for others it intensified the racism and discrimination they were already exposed to every day, to the point that they feared for their lives.

Asian people were subjected to both physical harm and emotional harassment once COVID-19 began to be considered a pandemic. The virus originated in Wuhan, a city in China, and spread all over the world. Because of the nature of the virus and its country of origin, an online narrative took shape that blamed Chinese people for the virus and for the deaths of millions of people across the globe. The turning point, however, was one simple tweet from Donald Trump early in the pandemic, when he called the virus the “Chinese virus”. Between the 12th of March and the 1st of April 2020 there were forty-one million tweets related to COVID-19, of which only thirteen thousand expressed anti-Asian hate. That number increased significantly after Trump’s tweet: in the forty-eight hours after he first used the term, the number of anti-Asian hate tweets rose by 656 percent (Williams, 2021). In the same period, police in the UK recorded a twenty-one percent rise in hate crime targeting south and east Asians.

This showed how what started as online speech harm became a monster with real-life consequences. Asian people were being assaulted in the streets all across the world. US Secretary of State Mike Pompeo referred to COVID-19 as the “Wuhan virus”, continuing the narrative outside of social media platforms.

How did the platforms react?

Facebook, Twitter, and YouTube made the first changes to their community guidelines and terms of service in 2020. They now flag content on the accounts of public figures as fake news, have deleted posts, and have even banned celebrities and politicians from their platforms (Perez, 2021). This all started because of the hate marginalised groups were receiving during COVID-19, and it caused a snowball effect across other aspects of moderation and regulatory guidelines.

Zuckerberg’s back and forth on free speech

Mark Zuckerberg’s stance on what free speech on his platforms should look like was clear: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online” (Perez, 2021). Meta, Twitter, and YouTube simply believed it was not their job to regulate what is said on their platforms. Zuckerberg also stated back in 2020, in the midst of the Black Lives Matter protests, that he did not think banning Donald Trump was the way to go: “I disagree strongly with how the President spoke about this, but I believe people should be able to see this for themselves because ultimately accountability for those in positions of power can only happen when their speech is scrutinized out in the open” (Perez, 2021). Apparently, Facebook only drew the line after the Capitol riot in the United States, where, following Trump’s defeat in the 2020 presidential election, more than two thousand of his supporters attacked the building, vandalising and looting their way in; five people lost their lives and many more were injured and traumatised.

After this event, Facebook banned Trump from all of its platforms. However, many people believe it was too little, too late: they suspect the only reason Facebook did not ban him earlier was the engagement he was generating on the platform amid its declining user base (Pengelly & Paul, 2023). A platform trying to stay relevant benefits from keeping a controversial public figure around, because controversy drives engagement and therefore revenue. After all, there is no such thing as bad publicity, right?

How can platforms move forward?

Sinpeng, Martin, Gelber & Shields (2021) believe Facebook should take more serious action, including (1) making transparent the types and weight of evidence needed to take action on hate figures and groups, to assist those collecting incriminating information, (2) making public a trusted partner in each country where Facebook is available, so that individuals and organisations have someone to go to when reporting issues or a crisis, and (3) conducting an annual APAC roundtable on hate speech involving key non-governmental stakeholders from the protected groups in all countries (p. 21).

Platforms now operate on the assumption that most if not all of their users will perform a ‘duty of care’: they rely on users to report hate speech and harassment before they take action. This can become a lengthy and slow process in a fast-paced online world. Why would platforms shift the responsibility onto their users when they should be the ones enforcing their own rules to make the experience safer for everyone? Is it more about protecting themselves from a lawsuit? How can you ask your users (who are basically your customers) to do the work for you? What Moore & Tambini (2022) suggest is that platforms answer to an independent regulator bound by the Human Rights Act. With trained specialists as moderators overseeing operations, fewer reports of harassment and hate speech would fall through the cracks.

Moore & Tambini (2022) also believe that regulation should not focus only on content but on the design of the service. Platforms should devote time to user complaints and safety by reworking their business model, and that means algorithms that catch malicious posts as soon as they are published. But how would such an algorithm understand the context around the words? Suppose someone tweets “I hate pineapple pizza”. ‘Hate’ is a word that could get flagged, but nobody’s feelings would be hurt by that comment (except maybe someone who religiously eats pineapple pizza). How could the algorithm tell the difference between a life-threatening comment and a distaste for fruit on pizza? Platforms need to work with engineers to ensure their systems are on par with what users need from them.
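To make the pineapple-pizza problem concrete, here is a minimal sketch in plain Python of the kind of naive keyword filter described above. The keyword list and sample posts are my own illustrations, not any platform’s actual moderation code:

```python
# Toy keyword-based flagger: fast and simple, but blind to context.
# The keyword list and sample posts are invented for illustration;
# real platforms rely on trained classifiers, not static word lists.

FLAGGED_KEYWORDS = {"hate", "attack", "destroy"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any keyword, ignoring who or what is targeted."""
    words = {word.strip(".,!?").lower() for word in post.split()}
    return not FLAGGED_KEYWORDS.isdisjoint(words)

posts = [
    "I hate pineapple pizza",                  # harmless opinion -> false positive
    "We will destroy the final boss tonight",  # gaming chat -> false positive
    "Lovely weather in Sydney today",          # clean -> correctly ignored
]

for post in posts:
    label = "FLAGGED" if naive_flag(post) else "ok"
    print(f"{label:>7}: {post}")
```

Both ‘flagged’ posts here are harmless, which is exactly the false-positive problem described above: a word-level filter cannot weigh what the sentiment is aimed at, so context-aware models and human review have to sit behind it.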

Overall, hate speech and harassment have been skyrocketing as the internet becomes more and more accessible. They cause emotional and psychological harm, leaving lasting trauma and making victims hypervigilant about their environment. Platform regulation still has a long way to go. What started as something that brought the whole world together now makes people wonder what life would look like (and how kids would grow up) without the constant negativity and hate speech they are subjected to every day on any social media platform.

Bibliography

European Commission against Racism and Intolerance (ECRI). (2023). Hate speech and violence. Retrieved from Council of Europe: https://www.coe.int/en/web/european-commission-against-racism-and-intolerance/hate-speech-and-violence

Moore, M., & Tambini, D. (2022). 5 – Obliging Platforms to Accept a Duty of Care. In M. Moore, & D. Tambini, Regulating big tech: Policy responses to digital dominance (pp. 93-109). New York: Oxford University Press.

Pengelly, M., & Paul, K. (2023, January 27). Trump’s return to Facebook will ‘fan the flames of hatred’, say experts and politicians. Retrieved from The Guardian: https://www.theguardian.com/us-news/2023/jan/26/trump-facebook-return-reaction-democrats

Perez, A. L. (2021). The “Hate Speech” Policies of Major Platforms. Montevideo, Uruguay: United Nations Educational, Scientific and Cultural Organization (UNESCO).

Security.org. (2023, January 26). Cyberbullying: Twenty Crucial Statistics for 2023. Retrieved from Security.org: https://www.security.org/resources/cyberbullying-facts-statistics/

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Dept. Media and Communications, The University of Sydney, and The School of Political Science and International Studies, The University of Queensland.

Vogels, E. A. (2021, January 13). The State of Online Harassment. Retrieved from Pew Research Center: https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/

Williams, M. (2021, March 29). COVID-19 political commentary linked to online hate crime. Retrieved from Cardiff University: https://www.cardiff.ac.uk/news/view/2510296-covid-19-political-commentary-linked-to-online-hate-crime
