Hate speech is a silenced gun in digital governance

1. Online hate fueled by stigmatization

Common means of online stigmatization include labeling, linking events or stereotypes to a group, selectively highlighting or ignoring personal characteristics, and attributing negative characteristics to specific groups (Bruns et al., 2020). Because information spreads so rapidly on the Internet, stigmatization serves as a catalyst for online hate speech. The COVID-19 pandemic was first identified in Wuhan, China, which made the city and its residents a target of stigmatization. From Wuhan to China and across Asia, these communities faced severe social judgment for their perceived association with the origin of the disease.

In today’s interconnected environment, where information spreads quickly and widely, certain events can be rapidly amplified and disseminated, fostering public bias against and misunderstanding of particular groups. In addition, different voices in online communities influence one another, so that otherwise ordinary voices become stigmatized and conflict between groups intensifies. Homophonic puns and derogatory nicknames are also common vehicles of stigmatization.

In Internet communication, hot topics change faster and spread more widely than in traditional media. When negative events are repeatedly exposed online, netizens pick up stigmatizing cues from them, deepening negative social perceptions of already stigmatized conditions such as mental illness or AIDS. Given the speed and reach of social media platforms and the Internet itself, stigma often escalates quickly into cyber violence. Information passes through many nodes, each of which can strip it from its original context and amplify or distort it.

2. The accumulation and catalysis of hatred in the “post-truth” era

“Post-truth” refers to apathy toward or lack of awareness of science, highly emotional public debate, widespread rumors, and the polarization of political opinion (Malcolm, 2021). In the “post-truth” era, where individual emotions and beliefs heavily influence public opinion and political decision-making, the concept of truth becomes blurred and is readily displaced by hypocrisy and deception. During the COVID-19 pandemic, for instance, people’s worries about the virus, anxiety and helplessness about the situation, dissatisfaction with medical conditions, grief over the loss of loved ones, and economic hardship produced an accumulation of strong negative and irrational emotions. Such irrational catharsis and emotional discourse are easily manipulated to shape public sentiment (Kwok et al., 2023). Some linked the virus to specific ethnic communities, such as Asians, without any official or scientific basis.

As Sismondo (2017) describes, we are living in a paradoxical “post-truth” age, characterized by a disconnect between people’s desire for information and their limited access to it. This information gap has created an environment in which hate speech can spread rapidly on social platforms. When individuals cannot obtain information through trusted channels, they may grow irrational or impatient while waiting for the truth, and hate speech emerges online. The “post-truth” mentality of certain users remains one of the key factors complicating Internet platforms’ regulation of hate speech. This highlights the significance of timeliness in digital policy and governance: to address malicious speech promptly and prevent harmful information from spreading, platforms must not let harm fester while they remain inactive. Stopping cyber violence at its early stages has become one of the central tasks of digital governance today.

Profit-driven and political factors do not directly produce hate speech and online harm. But like a malignant tumor with a long incubation period, they pose a lasting threat to the healthy development of network platforms, and from the COVID-19 era to today they have indirectly exacerbated the online harm caused by hate speech.

News video link: https://www.youtube.com/watch?app=desktop&v=cQd4rbBn89o

During COVID-19, some media outlets ignored the objectivity and accuracy of news reporting and used inflammatory headlines to grab readers’ attention. On May 27, 2023, the Indian news channel WION (World Is One News) released a video titled “Wuhan Virus wave grips China, again” on its media platform. Founded in 2016 by India’s Zee Media Group, the channel has 8.58 million subscribers on YouTube. Yet it referred to the COVID-19 virus as the “Wuhan Virus” without scientific basis, which undoubtedly spurred online hate speech, discrimination, and attacks against Asian communities.

Should digital platforms acquiesce to inflammatory chatter or ban it when regulating users? Do they choose silence to retain users? Especially for news channels with large followings, when automated algorithms cannot identify hate speech accurately, manual review becomes the main outlet for expressing a platform’s values. Facebook has admitted that its AI system can identify only about 65% of posts containing hate speech (Schroepfer, 2020), so who decides the remaining 35%? It is therefore especially important today to formulate ethical guidelines for AI, to regulate the research and development of AI technology, and to ensure that its development aligns with ethical and social values. Digital policy and governance should strengthen the review and regulation of AI algorithms to prevent the discrimination and unfairness that algorithms can introduce.
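As a purely illustrative sketch (not Facebook’s actual system), the division of labor implied by that 65/35 split can be expressed as confidence-based routing: high-confidence detections are handled automatically, while uncertain cases go to human review. The score_toxicity scorer and the thresholds below are hypothetical assumptions.

```python
# A minimal sketch of confidence-based routing between automated action and
# human review. NOT Facebook's system; the scorer and thresholds are
# hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    post_id: str
    action: str   # "remove", "human_review", or "allow"
    score: float

def route_post(post_id: str, text: str,
               score_toxicity: Callable[[str], float],
               remove_threshold: float = 0.9,
               review_threshold: float = 0.5) -> Decision:
    """Auto-remove only high-confidence detections; send the uncertain
    middle band (the '35%' the model cannot settle) to human reviewers."""
    score = score_toxicity(text)
    if score >= remove_threshold:
        return Decision(post_id, "remove", score)
    if score >= review_threshold:
        return Decision(post_id, "human_review", score)
    return Decision(post_id, "allow", score)

# Toy scorer for demonstration only; a real system would call a trained model.
def toy_scorer(text: str) -> float:
    return 0.95 if "hate" in text.lower() else 0.1

print(route_post("p1", "an example containing hate", toy_scorer))
```

The point of the sketch is that the thresholds, not the model alone, encode the platform’s values: where they are set determines how much of the undecided remainder lands on human reviewers.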

Hate speech is an elusive killer in the governance of online platforms because, to this day, neither human reviewers nor algorithms can identify it with complete accuracy. The COVID-19 crisis led to a surge in hate speech on social media, deepening global divisions, racial discrimination, and mistrust among individuals (Liu et al., 2022). On April 3, 2022, in Shanghai, a woman asked a delivery man to bring food to her father, who could not travel due to COVID-19 traffic restrictions. The delivery man walked for almost four hours to make the delivery. To show her gratitude she offered monetary compensation, which he politely declined; in the end, over his objections, she insisted on paying him 200 yuan as a thank-you. After the story spread on social media, however, many people criticized her for paying too little and subjected her to online violence, causing serious psychological damage. Three days later, helpless under the enormous pressure, she took her own life by jumping from a building. When the news spread online, the hate speech turned toward the delivery man, as netizens believed the woman had killed herself because he had not stood up for her.

Looking back at this tragedy, one cannot help but think that if the platform had been able to prohibit users from publishing abusive comments, it might never have happened. With the rapid development of the Internet, the problem of hate speech has become increasingly prominent, and the Internet’s anonymity, concealment, and openness have to some extent bred network violence. Yet defining what counts as “insulting” is a question that is difficult to answer even for human reviewers, let alone for a computer algorithm with no settled values.

On March 8, 2024, in China, the Cyberspace Administration of the CPC Central Committee issued the Opinions on Further Consolidating the Main Responsibilities of the Information Content Management of Website Platforms and the Notice on Effectively Strengthening the Governance of Cyber Violence, urging online platforms to cooperate actively and strengthen the governance of cyber violence with effective measures. The Supreme People’s Court, the Supreme People’s Procuratorate, and the Ministry of Public Security jointly issued the Guiding Opinions on Punishing Crimes of Internet Violence According to Law, clarifying the rules for applying charges of Internet violence, the rules for handling illegal acts of Internet violence, and the policy principles for punishing such crimes (Xu & Qu, 2024). The governance of digital platforms must involve the joint efforts of states and relevant institutions. The Internet is not a place beyond the law: hate speech is a gun with a silencer on digital platforms, and “killing at the press of a key” can amount to intentional homicide. From the pandemic to the present, too many cyber tragedies have sounded the alarm for the world.

Since the pandemic, harmful speech has been a continuation of social violence in cyberspace, seriously undermining the governance of digital platforms and even endangering public order and the legitimate rights and interests of citizens. People rightly respect and desire freedom of speech in the information age, but harmful speech on the Internet does not deserve that protection. In response, some countries have joined forces with online platforms to strengthen restrictions on harmful speech and online harm. The German government implemented stricter legal regulations on February 1, 2022, requiring social media networks with more than two million users to report relevant content and disclose users’ IP addresses to the federal police. The posts are also sent to the newly established Central Reporting Office for Cybercrime Content, where professionals process the reports.

At the same time, as intermediaries between audiences and information, online platforms have a responsibility to intervene in the spread of harmful speech. They should strive to mitigate the negative impact of offensive speech on the online environment by enhancing their algorithms or incorporating artificial intelligence. Wu et al. (2022) analyzed a machine learning model called GSTM that can effectively identify hate speech from the COVID-19 pandemic. The model automatically detects malicious content and reports it promptly to the platform’s relevant regulatory authorities, saving significant time compared with manual detection. In the rare cases where misclassification may occur, the final decision can be made by an appointed human reviewer. Overall, the model played a vital role in reducing hate speech during this period.
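The GSTM architecture itself is not reproduced here. What follows is only a generic sketch of the detect-report-escalate workflow the paragraph describes, assuming a simple TF-IDF plus logistic-regression classifier and an invented escalation band; the tiny training set is illustrative only.

```python
# NOT the GSTM model from Wu et al. (2022); a generic supervised
# text-classification sketch showing the detect-then-escalate workflow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (real systems use large labeled corpora).
train_texts = [
    "you people spread the virus, go back home",      # hateful (illustrative)
    "wishing a quick recovery to everyone in wuhan",
    "this ethnic group is to blame for covid",        # hateful (illustrative)
    "stay safe and follow the health guidance",
]
train_labels = [1, 0, 1, 0]  # 1 = hate speech, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(train_texts, train_labels)

def detect(text: str, escalate_band=(0.4, 0.8)) -> str:
    """Report high-confidence detections automatically; escalate borderline
    scores to an appointed human reviewer, as the workflow above suggests."""
    p = model.predict_proba([text])[0, 1]
    if p >= escalate_band[1]:
        return f"report to platform regulator (p={p:.2f})"
    if p >= escalate_band[0]:
        return f"escalate to appointed reviewer (p={p:.2f})"
    return f"no action (p={p:.2f})"

print(detect("these people brought the virus"))
```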

The experience of cyberbullying during the pandemic still offers valuable advice for digital governance in 2024. The Internet is vast and full of all kinds of people; if actually confronted with online violence, I believe many people would, like me, be at a loss. As the times move forward, our governance plans for hate speech should improve with them. In my opinion, platforms that provide services could offer solutions at the product level. Cyber violence can be divided into three stages: before, during, and after an incident, and different measures suit each stage. Before an incident, the focus should be on user education and risk prediction to reduce violent content at the source. During an incident, platforms need to provide users with convenient and effective anti-abuse features to isolate and block harmful information. After an incident, the focus should shift to accountability and mechanisms for sanctioning offenders, as sketched below.
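As a hypothetical product-level sketch of those three stages (the stage names and measures below are illustrative assumptions, not an existing platform API):

```python
# A hypothetical mapping from incident stage to product-level measures.
# All measures listed are illustrative assumptions, not a real feature set.
from enum import Enum, auto

class Stage(Enum):
    BEFORE = auto()   # risk prediction and user education
    DURING = auto()   # isolation and filtering tools for the target
    AFTER = auto()    # accountability and sanctions

MEASURES = {
    Stage.BEFORE: ["show posting-etiquette prompts", "flag rising-risk topics"],
    Stage.DURING: ["one-tap comment lockdown", "mute strangers", "bulk-report"],
    Stage.AFTER:  ["suspend abusive accounts", "preserve evidence for authorities"],
}

def respond(stage: Stage) -> list[str]:
    """Return the product measures suggested for a given stage."""
    return MEASURES[stage]

print(respond(Stage.DURING))
```

Separating measures by stage makes it explicit that prevention, in-event protection, and after-the-fact accountability each require different tooling.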

The stigmatization of specific groups and the spread of “post-truth” thinking during the COVID-19 pandemic fueled hate speech and online harm. This posed an unprecedented challenge to digital policy and governance, and it also offers valuable lessons for platform governance today. In an age when information spreads so quickly online, hate speech spreads and escalates more easily; governing for a harmonious network environment therefore requires network platforms and national institutions to work together.

Bruns, D. P., Kraguljac, N. V., & Bruns, T. R. (2020). COVID-19: Facts, cultural considerations, and risk of stigmatization. Journal of Transcultural Nursing, 31(4), 326–332. https://doi.org/10.1177/1043659620917724

Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (pp. 91–96). Cambridge: Polity.

Kwok, H., Singh, P., & Heimans, S. (2023). The regime of ‘post-truth’: COVID-19 and the politics of knowledge. Discourse: Studies in the Cultural Politics of Education, 44(1), 106–120. https://doi.org/10.1080/01596306.2021.1965544

Liu, Z., Yang, R., & Liu, H. (2022). Concern on cyber violence and suicide during COVID-19 pandemic. Frontiers in Psychiatry, 13, 956328. https://doi.org/10.3389/fpsyt.2022.956328

Malcolm, D. (2021). Post-truth society? An Eliasian sociological analysis of knowledge in the 21st century. Sociology. Advance online publication. https://doi.org/10.1177/0038038521994039

Pluta, A., Mazurek, J., Wojciechowski, J., et al. (2023). Exposure to hate speech deteriorates neurocognitive mechanisms of the ability to understand others’ pain. Scientific Reports, 13, 4127. https://doi.org/10.1038/s41598-023-31146-1

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney, and School of Political Science and International Studies, University of Queensland. https://doi.org/10.25910/j09v-sq57

Sismondo, S. (2017). Post-truth? Social Studies of Science, 47(1), 3–6. https://doi.org/10.1177/0306312717692076

Wu, X.-K., Zhao, T.-F., Lu, L., & Chen, W.-N. (2022). Predicting the hate: A GSTM model based on COVID-19 hate speech datasets. Information Processing & Management, 59(4), Article 102998. https://doi.org/10.1016/j.ipm.2022.102998

Xu, & Qu. (2024, March 10). People’s Voice: Internet bullying “pressing the key to hurt people” must be held accountable. People’s Daily Online. http://opinion.people.com.cn/n1/2024/0310/c436867-40192654.html

Image from: https://stock.adobe.com/ch_fr/images/hate-speech-danger-as-a-hateful-content-symbol-with-a-gun-emerging-out-of-a-word-bubble-for-social-media-as-a-threatening-message-or-hostile-communication-and-verbal-abuse-threat/553510661
