The devil hiding in the screen: How hate speech and online harm damaged Chinese society during the COVID-19 pandemic

Introduction

In the second decade of the twenty-first century, digital platforms became an indispensable part of most people’s daily lives, and almost everyone has enjoyed the convenience they bring. Yet in the dark corners that most of us are accustomed to ignoring, hate speech and online harm are slowly destroying some people. Hate speech is highly contagious and its consequences are long-lasting, and its ubiquity has been exacerbated by the instant reach of social media (Siegel, 2020). The COVID-19 crisis that spread across the world from the end of 2019 may seem to have faded from memory, but the hate speech and online harm that circulated on Chinese digital media during that period remain scars that many Chinese people still dare not expose. This blog explores how online interactions during the COVID-19 pandemic amplified divisions and shaped and harmed public discourse. The focus is on the multifaceted impact of hate speech in China, revealing the interplay between digital platforms, social responses, and the users behind the screens.

What is hate speech and online harm?

According to Flew (2021), hate speech is expression that encourages or incites hatred against others on the basis of characteristics such as race, ethnicity, religion, nationality, or sexual orientation. Online harm is defined more broadly, covering a range of negative experiences users may encounter online: it includes not only hate speech but also other forms of harmful content such as misinformation, cyberbullying, and child exploitation (Woods & Perrin, 2022). With the rise of social media in recent years, platform algorithms have unintentionally spread hate speech, and its harmful effects have been gradually amplified across social media platforms.

However, such statements often fly under the banner of free speech, and striking a balance between free speech and hate speech is delicate and difficult. The value of free speech in a democratic society should not serve as the theoretical basis for the free spread of violence online (O’Regan & Theil, 2020), because hate speech by its very nature damages democratic values and individual well-being. Governments therefore need to intervene where necessary, and digital platforms should take decisive measures to regulate and sanction it (Siegel, 2020).

https://www.nytimes.com/zh-hans/2023/06/08/world/asia/china-online-trolls.html

More than four years have passed since the first outbreak of COVID-19 in Hubei Province, China, and throughout that period the global health crisis remained a priority in people’s daily lives. In the country where the outbreak began, the rapid, large-scale spread of an emerging virus caused unprecedented fear among Chinese people, and all of that negative sentiment was immediately expressed on Chinese social media platforms.

As the contemporary information battlefield, these social media platforms provide citizens with a source of the latest news and necessary community support (Jin & Tay, 2022). But with the influx of information, the fake news and hate speech mixed in seemed to exceed the platforms’ capacity for supervision and continued to ferment in user discussions. For a time, the number of confirmed cases became a sharp sword piercing the hardest-hit areas. Until the Chinese government abandoned the “zero-COVID” policy at the end of 2022, no one, from Wuhan residents to Shanghai residents, was spared from online harm.

Although the Chinese government required social platforms to take the most stringent measures to control speech and the flow of information during this period, much online interaction had long since turned into a game of blame and name-calling. Faced with the huge number of people online, managing hate speech and misinformation became a major challenge in maintaining the online environment.

Online communication in China: background and particularities

For a long time, the Chinese government has strictly controlled overseas social media and websites, blocking the world’s mainstream social platforms so that residents cannot legally use them inside China. China’s own social platforms, such as WeChat, Weibo and Douyin, have therefore become the dominant tools for Chinese people’s daily online communication. They not only provide the basic functions of social platforms but also extend into areas such as e-commerce and personal financial services. Furthermore, social media’s role as an important public sphere in contemporary China allows citizens to engage in social criticism and mobilize against perceived injustices by authorities (Jin & Tay, 2022). On the other hand, government departments can directly intervene in the management of social media when they deem it appropriate, including by publishing policies, providing official information, and guiding public-opinion supervision.

Social media plays an important role in shaping public opinion and influencing ideologies, especially during critical times like the COVID-19 pandemic (Liu, 2020). In China at that time, almost everyone checked social media every day for the number of confirmed cases in their region, nucleic acid testing requirements, and lockdown decisions.

https://www.hicom-asia.com/chinese-social-media-platforms/

Manifestations of hate speech during lockdown

For residents in locked-down areas across the country, prolonged, house arrest-like home isolation and the craving for resources and freedom it produced led to anxiety and depression. In every turbulent environment, human nature looks for a culprit or scapegoat at which to vent its emotions through hateful remarks. At first, some people turned their attention to the government and relevant departments. According to research by Fei Yan (2020), the Chinese government’s initial lack of transparency about the severity of the COVID-19 outbreak led to public anger, especially after Wuhan announced a lockdown, and during this period dissatisfaction was evident on social media. These bad emotions then turned into harm directed at specific groups, which was obviously not a fair and just discussion. Migrant workers from other provinces, front-line medical staff, and even asymptomatic infected people became the first targets of attack. They were accused of spreading the virus, criticized by many users, and even had their personal information published. For people who were themselves caught in a social crisis, this was undoubtedly a fatal double blow.

https://www.plantemoran.com/explore-our-thinking/insight/2022/05/shanghais-covid-19-lockdown-and-its-impact-on-the-supply-chain

In a country that implemented a zero-COVID policy for two years, the confirmed-case count became a rigid indicator. The data determined the freedom of individuals, communities and even cities, and it is this importance that drove some online users to become demons. A noteworthy example comes from the Shanghai lockdown, during which many residential communities set up owners’ groups on WeChat. Their original purpose was to make communication between property managers and owners easier. During the epidemic, however, whether a community had positive cases became the deciding factor in whether it was locked down. Whenever property managers announced confirmed positive residents in the owners’ chat groups, accusations and even hateful comments from other residents followed. As a result, test-positive residents in a number of communities took their own lives by jumping from buildings because they could not bear the online attacks, and across the country they were far from the only people driven to suicide by online violence.

https://www.voachinese.com/a/china-covid-cases-edge-highter-as-xian-steps-up-curbs-20211227/6371237.html

Responses and measures from governments and platforms

In response to the large number of extreme remarks that continued to appear on social platforms at the time, the Chinese authorities and the platforms quickly took a series of measures. First, the government urged relevant departments to provide direct guidance to the platforms and to supervise them in policing online speech more closely. The Cyberspace Administration of China quickly promulgated information security regulations (Lu et al., 2021). Online influencers and individuals who spread false information or incited hate speech were punished with bans, account closures and even administrative detention. Although these actions were aimed at maintaining social stability, they also further reduced the space for ordinary discussion of the virus.

It is necessary for platforms to regulate speech, which can largely mitigate the harm caused by user interactions as platforms grow more popular (Woods & Perrin, 2022). In addition, because of their algorithmic structure, social platforms can unintentionally amplify hate speech (Siegel, 2020). How to improve the algorithm as quickly as possible was therefore a common technical problem faced by every Chinese social platform at the time. As China’s mainstream platforms, WeChat and Douyin updated their algorithms immediately: once their systems identified sensitive hate-speech keywords in a post, its promotion was reduced. A large number of content moderators were also recruited to help the system detect harmful content. Other platforms such as Xiaohongshu and Weibo soon followed with similar decisions. Once a post contained words such as “epidemic,” “lockdown,” or “virus,” the system handed it directly to a randomly assigned auditor to judge the content: the post could be published if it did not involve false information or hate speech; otherwise it could not. The job of a content moderator is complex, requiring judgements about content without overstepping censorship and weighing the boundary between harmful content and free speech (Roberts, 2019). Many people therefore did not consider manual judgement of content fair, because the platforms provided no specific details of how judgements were made. The government and platforms have maintained strong supervision of online speech ever since, which has raised endless concerns about privacy rights and freedom of speech.
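To make the workflow described above more concrete, the sketch below shows, in Python, what a keyword-triggered review pipeline of this kind might look like: an automated first pass flags posts containing sensitive terms and routes them to a randomly assigned human auditor, who makes the final publish-or-reject call. The term list, the `Post` structure, and the helper functions are illustrative assumptions for this blog, not the actual systems used by WeChat, Douyin, Xiaohongshu, or Weibo.

```python
# A minimal, hypothetical sketch of a keyword-triggered review pipeline.
# The keyword list, data structures, and review logic are illustrative
# assumptions, not the real implementation of any Chinese platform.

import random
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    PUBLISH = "publish"        # no sensitive terms found; post goes out directly
    HOLD_FOR_REVIEW = "hold"   # sensitive terms found; routed to a human moderator
    REJECT = "reject"          # moderator judged the post to be misinformation or hate speech


# Illustrative sensitive-term list (real lists are proprietary and far larger).
SENSITIVE_TERMS = {"epidemic", "lockdown", "virus"}


@dataclass
class Post:
    author: str
    text: str
    flags: list[str] = field(default_factory=list)


def screen_post(post: Post) -> Decision:
    """Automated first pass: flag posts containing sensitive terms."""
    lowered = post.text.lower()
    post.flags = [term for term in SENSITIVE_TERMS if term in lowered]
    return Decision.HOLD_FOR_REVIEW if post.flags else Decision.PUBLISH


def assign_moderator(moderators: list[str]) -> str:
    """Hand a flagged post to a randomly chosen human auditor, as the text describes."""
    return random.choice(moderators)


def moderate(post: Post, is_harmful) -> Decision:
    """Second pass: a human reviewer (simulated here by `is_harmful`) makes the final call."""
    first_pass = screen_post(post)
    if first_pass is Decision.PUBLISH:
        return first_pass
    reviewer = assign_moderator(["auditor_a", "auditor_b", "auditor_c"])
    print(f"Post by {post.author} flagged for {post.flags}, sent to {reviewer}")
    return Decision.REJECT if is_harmful(post) else Decision.PUBLISH


if __name__ == "__main__":
    # Toy harmfulness check standing in for human judgement.
    demo = Post(author="user123", text="Another lockdown announced in our district today.")
    print(moderate(demo, is_harmful=lambda p: "hate" in p.text.lower()))
```

The opacity that users complained about sits in the final step: the criteria applied by the human reviewer are not visible to the person whose post is held, which is exactly why many felt the manual judgement was unfair.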

http://www.scio.gov.cn/ztk/dtzt/42313/43142/index.htm

Impact and improvement

The impact of pandemic-era hate speech and online harm on society and individuals has been profound and difficult to erase. A large amount of hate speech first breaks down people’s psychological defences. The specific groups that were slandered feel cut off from the public, resulting in extreme anxiety and the negative emotions of isolation (Yan, 2020). It also pushes them toward extreme measures to fight back, and these psychological pressures may be more harmful in the long term than the virus itself. In addition, the social divisions caused by hate speech cannot easily be healed, and the social trust that had been built up breaks down. People’s inability to stand more united in the face of social crises also makes reconstruction in the post-epidemic era more difficult.

Although the Chinese government’s zero-COVID policy continues to be criticized, its efforts to reduce hate speech and online harm during the lockdowns now appear to have been effective. In the post-epidemic era, although the government has slightly relaxed its control over online speech, it continues to develop its control of false information and hate speech. It has released a series of official documents not only to enhance citizens’ digital literacy but also to explain how to judge whether information is reliable. It is also developing a more humane approach to censorship, pushing platforms to revise their algorithms to be more accurate and empathetic. In addition, offline campaigns are run in communities and schools to further cultivate citizens’ vigilance against, and resistance to, online harm.

https://www.zz5.net/news/14574

Conclusion

Everything has two sides. As the dark side of online interaction, hate speech and online harm cause significant damage at every level, from the personal to the social. As one of the countries hardest hit by COVID-19, China has learned lessons throughout the crisis that are worth reflecting on and learning from. Even strong regulation like China’s, intended to protect the online environment, is not a perfect measure; but in particular social crises it is indeed the most direct and effective way to deal with hate speech.

Now that we have gradually emerged from the shadow cast by the virus, whether social platforms will keep innovating in both technology and the supervision of speech is a question we should all continue to think about. Ensuring a healthy online environment, achieving fair online communication, and protecting individuals and communities from online harm is a shared mission for humanity in the future.

Reference List

Flew, T. (2021). Issues of concern. In Regulating platforms (pp. 91–96). Polity.

Jin, Y., & Tay, D. (2022). Offensive, hateful comment: A networked discourse practice of blame and petition for justice during COVID-19 on Chinese Weibo. Discourse Studies, 25(1), 3–24. https://doi.org/10.1177/14614456221129485

Liu, Z. (2020). Hate speech and its harm to China during the COVID-19 epidemic: Taking the interview program on YouTube as an example. e-Repositori UPF. https://repositori.upf.edu/handle/10230/48043

Lu, Y., Pan, J., & Xu, Y. (2021). Public sentiment on Chinese social media during the emergence of COVID-19. Journal of Quantitative Description: Digital Media, 1. https://doi.org/10.51685/jqd.2021.013

O’Regan, C., & Theil, S. (2020). Hate speech regulation on social media: An intractable contemporary challenge. Research Outreach. https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Siegel, A. A. (2020). Online hate speech. In N. Persily & J. A. Tucker (Eds.), Social media and democracy (pp. 56–88). Cambridge University Press.

Woods, L., & Perrin, W. (2022). Obliging platforms to accept a duty of care. In Regulating big tech: Policy responses to digital dominance (1st ed., pp. 93–109). Oxford University Press.

Yan, F. (2020). Managing ‘Digital China’ during the COVID-19 pandemic: Nationalist stimulation and its backlash. Postdigital Science and Education, 2(3), 639–644. https://doi.org/10.1007/s42438-020-00181-w
