Human peace is slowly being dismantled by hate speech and online hazards

Hate Speech, from Selma Partners, April 2019

“The Chinese Virus”

Since the Covid-19 outbreak in 2020, the proper name of the virus has been maliciously replaced with "Chinese virus" on major global social platforms such as Twitter and Facebook. This is a classic example of hate speech linking the COVID-19 pandemic to a particular race. Such remarks lead people of all nationalities to point the finger at China and generate hatred towards it. As the COVID-19 pandemic unfolded, hate speech on social media about China and Chinese people encouraged social stigmatization (Fan, Yu and Yin, 2020). We can view this behavior as inciting people around the world to intensify prejudice and discrimination against China and even against Asians more broadly. Because of the spread of hate speech, many people, apart from medical researchers, shifted their target from the virus to a nation, and the Chinese were cast as the source and transmitters of the epidemic in this global catastrophe. Some politicians even used it as a weapon for anti-Chinese remarks and proposals: Trump delivered a speech criticizing China over the COVID-19 epidemic and other issues, and announced the termination of entry into the United States for some Chinese students and scholars (Wu, 2021).

U.S. Representative Lois Frankel responds to Trump's hate speech

When a politically influential public figure openly engages in hate speech, there is no doubt that the negative sentiment of the masses towards another people will be exacerbated, and racial discrimination will intensify under the influence of such absurd remarks.

Interpreting content out of context

During the Covid-19 pandemic, some foreign media outlets falsely "exposed" China's handling of the novel coronavirus outbreak by taking material out of context. There are two cases here. First, the content publisher maliciously exaggerates or completely distorts the facts to fabricate "news". Second, the video or image material released by the content creator is genuine, but the accompanying text is distorted, or only a fragment of the video is played up, misleading other users and generating negative public sentiment. The BBC, a mainstream British public broadcaster, used camera language to smear China more than once during the epidemic. BBC China correspondent John Sudworth used selected footage to fabricate a story of China's attempt to "cover up the epidemic" and to persuade global audiences that Wuhan, China, was the origin of the virus.

Video Screenshot

In addition, the BBC demonized a video of Chinese police conducting an anti-terrorism drill as "violent law enforcement and human rights violations" by Chinese epidemic prevention authorities. Only in the second half of the video can one clearly see the words "anti-terrorism drill", but the reporter never pointed out the real content of the footage.

In the second half of the video, the words "anti-terrorism drill" are visible

I was studying for my undergraduate degree in the UK when these two fake news stories were released. I already carried a lot of negativity because I could not go home due to China's anti-epidemic policy, and with these stories, I admit that I too was incited by bad actors to resent my own country, and I tried to speak up for what I thought was the "truth" on Chinese social media (because of China's internet firewall, I believed people in China could not see the whole picture, but had I seen it myself?). This is the horror of hate speech: it can even destroy a person's faith. In the face of important facts, especially events that concern all of humanity, overwhelming news coverage constantly shapes users' views and opinions. How is the malicious spreading of rumors not an outgrowth of hate speech, when people are more inclined to believe what they want to believe once negative views are preconceived, and who cares about the facts? Users exhibit greater cognitive activity when news headlines align with their political opinions, and they are more likely to believe them; headlines that challenge their opinions receive little cognitive activity (i.e., they are ignored) and are less likely to be believed (Moravec et al., 2018). Unfortunately, research observes that "the content generated by the hateful users tend to spread faster, farther and reach a much wider audience as compared to the content generated by normal users" (Mathew et al., 2019). This means that hate speech, once it begins to circulate, can easily and completely upend a person's views and cause permanent damage to a person or even a nation.

Dissemination of truthful but potentially harmful information

The malicious spreading of rumors is clearly an act against moral and social norms, but there is also a situation in which the communicated content is true yet still provokes feelings of national hatred. According to international human rights standards, citizens are allowed to express their views on events. In China at that particular time, however, netizens' accounts of the real situation were blocked by the platforms because the content they wanted to publish was extremely detrimental to national unity and the implementation of state policy. The Sitong Bridge banner incident in Beijing is one example.

Photos from the scene of the incident

This news of resistance to an irrational anti-epidemic policy went virtually unnoticed in China, as similar images and texts were blocked from Chinese social media platforms as soon as they were posted. Were they blocked because they were hate speech? The incident used a radical method to gain attention, but its purpose was to solve a problem rather than to arouse national hatred. From the perspective of the person concerned, he was speaking out for the benefit of many; from the perspective of the country, however, such behavior will undoubtedly incite more people to resist policy with unreasonable methods or even violence, and once the masses are incited, the consequences could be disastrous, especially in a country of 1.4 billion people. So even if the news is newsworthy, if it is not in the public interest, the platform's crackdown is defensible. This does not mean we should turn a blind eye to the truth, but rather that we should encourage users to articulate facts rationally and speak up for themselves through reasonable means. Promoting freedom of expression is not in conflict with fighting hate speech, and the key to balancing the two lies in media literacy. Beyond platform regulation, it is more important to improve media literacy so that users have the ability to speak sensibly and think calmly enough to distinguish right from wrong. Improved media literacy enables users to avoid being misled by false information and to express true information appropriately on Internet platforms. When users are no longer easily drawn to hate speech (bad information, fake news, extreme speech, etc.), most publishers of hate speech will lose interest and stop. Without a democratic and critical approach to media literacy, the public will be positioned merely as selective receivers and consumers of online information and communication (Livingstone, 2003).

Online abuse

The establishment of the Internet has provided us with a virtual space where we can speak freely, but it has also brought disadvantages. The reason people dare to shout insults such as "Chinese virus" across Internet platforms is that the platforms themselves exercise only vague control over particular speech and cannot accurately recognize every statement of a hateful nature. As a result, even a user whose speech violates moral and social norms will not be substantially punished. Platforms' failure to judge the type of speech accurately is itself a sign of online abuse, and one of its consequences is the ease with which hate speech spreads, since it is difficult to clean up or prevent the dissemination of such easily published speech without strong intervention by the platforms. What is more worrying is that when platforms cannot recognize hate speech and users report it, the reports are often dismissed because the platforms' own monitoring systems again fail to recognize the type of speech: users "were 'confused' by Facebook's definition of hate speech because many times when they reported what they interpreted as hate posts, their complaints were not upheld" (Sinpeng et al., 2021). Regulating hate speech is therefore a very difficult and complicated task for platforms. The platforms' own filtering mechanisms are one aspect, but the more significant problem is that both state-owned and private media are to some extent controlled by governments. A social media outlet's bias will favor whichever country it is affiliated with (e.g., the BBC, the UK's public broadcaster, as mentioned above), so hate speech about nations is difficult to eliminate completely in a media environment where the state indirectly controls the media, since social media platforms are also powerful political weapons.
Even if the station owner is private, the government may still be able to control news content "indirectly", providing subsidies, government advertising, or outright bribes to encourage the private owner to bias coverage away from the commercially optimal editorial policy (Gehlbach & Sonin, 2014).

How political parties manipulate the media | ABC News

Therefore, cultivating users’ ability to discriminate media content may be more important than filtering out inappropriate speech.

“Safe havens” for verbal violence

To this day, searching for Covid-19-related topics on major global social media platforms still turns up netizens verbally attacking Chinese users, and even China itself, over the pandemic.

Screenshot from Twitter

Because of the anonymity of the Internet and its regulatory loopholes, people can freely express their opinions while attacking and insulting other users with impunity, which is another great disadvantage of online abuse. Thanks to the privacy and confidentiality of identity information online, it is easier for users, under the protection of their virtual identities, to behave in ways completely inconsistent with their offline social behavior. Hermawati et al. (2021) indicate that social media provides a space for teenagers to adopt the identity they desire, an identity that leads them to become someone else in order to cover up their shortcomings. This has made the phenomenon of online violence rampant. Such verbal attacks occur not only between two countries but, more distressingly, also between compatriots. Below is an interview I conducted with an international student who returned to China from the UK during the 2020 outbreak, simply trying to go home, and was subjected to a "human flesh search" (crowdsourced doxxing) even more heinous than ordinary online violence.

Screenshot from WeChat

From the interview, I learned that while he was returning to China to undergo quarantine as required (at that time the Chinese public was paying close attention to overseas returnees, and he unfortunately became the first imported case in his region), his home community's address was discovered by netizens through media channels and used as a clue to illegally obtain his identity information and telephone number. Without knowing who the perpetrators were, he was subjected to an avalanche of text-message threats, verbal harassment and online violence. In the end, these people bore no legal responsibility for their actions, while the interviewee was forced to endure nearly half a month of abuse, harassment and intimidation from his own compatriots. Because of the special characteristics of the Internet, it has become a safe haven for online "abusers", and this situation continues to develop. "The large-scale circulation of hate speech and other forms of online abuse on digital platforms have become significant and growing issues of concern" (Flew, 2021).


Given the complexity of the current global media environment, we cannot rely solely on further improvements in platform control policies to avoid all online hazards. While such hazards are pervasive, media literacy can be improved from the users' side to help maintain a healthy Internet environment.


Fan, L., Yu, H., & Yin, Z. (2020). Stigmatization in social media: Documenting and analyzing hate speech for COVID-19 on Twitter. Proceedings of the Association for Information Science and Technology, 57(1).

Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (pp. 91-96). Cambridge, MA: Polity Press.

Gehlbach, S., & Sonin, K. (2014). Government control of the media. Journal of Public Economics, 118, 163–171.

Hermawati, T., Setyaningsih, R., & Nugraha, R. P. (2021). Teen Motivation to Create Fake Identity Account on Instagram Social Media. International Journal of Multicultural and Multireligious Understanding, 8(4), 87–98.

Mathew, B., Dutt, R., Goyal, P., & Mukherjee, A. (2019). Spread of Hate Speech in Online Social Media. Proceedings of the 10th ACM Conference on Web Science – WebSci ’19, 173–182.

Moravec, P., Minas, R., & Dennis, A. R. (2018). Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense at All. SSRN Electronic Journal, 43(4).

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific (p. 33). Department of Media and Communications, The University of Sydney.

Livingstone, S. (2003). The changing nature and uses of media literacy (pp. 1–2). London School of Economics and Political Science.

Wu, X. (2021). Trump's impact on China-U.S. relations and anatomy of U.S. policy toward China. Fudan Journal (Social Sciences), 5, 170.
