How does online abuse happen? And what more can governments and platforms do to limit hate speech?

Credit: Wildpixel / iStock

A tragedy caused by online abuse

On May 25, 2023, at 1:50 p.m., a teacher (Liu) at the Yuehu Campus of Hongqiao Primary School in China struck and injured a first-grade student (Tan) while driving on the school grounds. Doctors were unable to revive Tan, who died in the hospital.

After the accident, Tan’s mother composed herself and made several trips to the school, demanding that officials apologize to her child and provide a reasonable explanation. She spoke logically and appeared in good spirits, dressed elegantly and wearing makeup. For a grieving mother, this took remarkable strength.

But one netizen took note of and catalogued her outfits: “She changed two pairs of Chanel shoes and two black suits in three days.” Her appearance did not match some people’s expectations of a woman who had just lost her son, and so the inappropriate scrutiny and cyber violence began. This kind of attention reflects an obscenely dark mentality and inflicted secondary harm on the mother, placing a parent suffering the loss of a child in the court of public opinion to be picked apart repeatedly.

Figure 1. Building of the incident. (Photo by Zhu Na, CCTV)

Finally, another tragedy occurred. Seven days after Tan’s death, the mother, crushed by both the loss of her son and the barrage of abusive online comments, jumped to her death from a 24-story building, as though her death were the only way left to prove her innocence.

How do words lead to atrocities?

Flew (2021) argues that hate speech itself does not incite public violence, although there is often a connection between the two. In this incident, there were three stages of escalating online abuse.

The first stage is largely unintentional, driven by people’s curiosity about gossip. When the incident first happened, netizens were perhaps just making offhand remarks with mocking overtones. At the public level, however, malicious public opinion is a social nuisance: when many people become absorbed in gossip, the tragedy itself recedes, reflection disappears, and the sympathy and comfort society owes the victims vanishes.

A second stage emerges as time passes and more people join the gossip. Because of the anonymity and distance the Internet provides, people may believe they can escape responsibility and consequences. Gender stereotypes and disputes have been one of the leading causes of online harassment (Flew, 2021). One of the three main characteristics of hate speech is that it is “directed against a specified or easily identifiable individual” (Parekh, 2012). Tan’s mother was treated as a typical representative of women.

Figure 2. Screenshot of hate speech.

A significant number of netizens subconsciously believe that a woman who has lost her son should remain in a state of “collapse” for a long time, disheveled and haggard, incapable of arguing her case against the school logically. Because the mother’s conduct did not match this stereotype, netizens harboring psychological distortion, discrimination, or prejudice against women saw how composed Tan’s mother was and how expensive her clothes were, and became jealous and hateful. They vented their anger through hate speech, including judgments about her dress and obscene sexual insults. Groupthink drew more and more people into the online abuse.

This illustrates the continuum between discriminatory statements that do not qualify as hate speech and statements that openly advocate violence against marginalized groups (Cortese, 2006).

Ultimately, platform features such as the ‘like’ button not only encourage the spread of hurtful words but also give marketing accounts a chance to hype them up. Thus the third stage began.

In China, a common marketing strategy exploits such trending events: concealing details and amplifying rumors to manufacture conflict, attract traffic, and ultimately profit from selling goods. In this process, harmful words spread more widely through influencer accounts, intensifying the psychological impact on the victim and becoming one of the reasons the mother took her own life. The practice is so pervasive that it has become a worrying trend and culture. Influencers with ulterior motives are adept at seizing on misogynistic topics to gain traffic. It works like an assembly line of salacious rumors, turning the subconscious of male users into a profit-making resource.

In addition, there is a fourth stage to many cyber violence incidents – doxing, which can have even more severe consequences.

In the case of three women who debated gender and cultural diversity in video games (Anita Sarkeesian, Zoë Quinn, and Brianna Wu), harassers publicized their home addresses. They had to leave their homes and cancel appearances when their safety was threatened. Even an author as famous as J.K. Rowling has suffered long-running cyber abuse and intimidation: in 2020, after she published a series of opinions about the LGBT community and her dissatisfaction with a new bill, some fans denied her identity as the author of Harry Potter.

This highlights a process in which discriminatory language can escalate into intimidation and contempt, making it difficult for victims to lead their personal and autonomous lives (Parekh, 2012, pp. 43, 45).

Measures introduced by the Chinese government

The suicide attracted widespread social attention, and regulations to combat cyber violence soon followed. On July 7, 2023, the Cyberspace Administration of China released the “Provisions on the Governance of Cyber Violence Information (Draft for Public Comments),” which states that those who use cyber violence for marketing or to stir up hateful discrimination (the behaviors described in stages two and three above) will be punished severely by law, and that network service providers should take measures such as suspending profit-making privileges, removing followers, restricting account functions, and prohibiting the re-registration of accounts. Since the regulations were issued, large-scale online abuse on China’s Internet has visibly declined. Although the current regulations are imperfect, they are an essential step toward legislating against online abuse, moving governance beyond judicial remedies applied only after tragedies.

On March 8, 2024, the work report of the Supreme People’s Court noted that in 2023, in response to online violence, people’s courts at all levels concluded 32 public prosecution cases of online defamation, with 85 guilty verdicts, year-on-year increases of 10.3% and 102.4%, respectively. In the case described at the start of this blog, on July 14, 2023, Chengdu Internet security police under the Ministry of Public Security placed a man surnamed Qi in administrative detention for spreading the rumor that Tan’s mother had received 2.6 million yuan in compensation. In some other cases, perpetrators whose insults led to deaths have been sentenced to fixed-term imprisonment.

Be wary of confusion caused by over-regulation

However, overly strict legal controls may not be applicable in most countries, given differences in social and cultural background, and they can spark heated debate over whether legislation limits freedom of expression.

On April 1, Scotland’s new hate crime law came into force. The Act creates a new offence of stirring up hatred against any of the protected groups it covers. However, it does not include gender as a protected characteristic and makes no reference to misogyny, leaving women unprotected.

Figure 3. Screenshot of hate speech on Twitter.

Online hate speech about the new law went too far, with some posts urging protest (and some heated words intended to stir fear and division). By April 2, protesters had taken to the streets against the police to express their dissatisfaction with the new hate crime law. Whether more chaos will follow is unclear, but it is evident that government over-regulation of speech can, to a certain extent, do the exact opposite of ‘controlling people’ in a democratic society and even cause disorder.

Figure 4. Screenshot of Protests against the new law. (2024)

The Current Regulations of Platforms

Due to the frequency of hate speech, social media platforms have put certain mechanisms in place. For example:

Weibo: Article 36 of the Community Convention states that users must not publish smear campaigns or promote hateful information. In its Infringement Complaint Zone, for accounts that exploit hot topics for malicious marketing or distort original intent, the platform will apply measures to the offending content and accounts such as deletion, blocking, disabling functions, prohibiting the account from being followed, prohibiting modification of account information, and deregistering the account (Weibo, 2021).

Facebook & Instagram: Meta uses its own technology to proactively monitor and remove problematic content (although it sometimes mistakenly removes innocuous posts) and employs trained review teams to search for potential violations. For offending content, Meta will remove it, reduce its distribution, or add a warning. Repeat offenders may have their accounts restricted or even disabled.

TikTok: One thing TikTok does better than the two platforms above is define hate speech more clearly and state explicitly what may not be posted, such as ‘Demeaning someone based on their protected attributes by saying or implying they are physically, mentally, or morally inferior, or calling them degrading terms’ (TikTok, 2023).

Figure 5. Screenshot of TikTok regulations.

However, regulation is very difficult in practice because there is no unified standard for identifying hate speech, platforms have conflicting incentives, and algorithms are not yet intelligent enough. In many cases, platforms would rather pay compensation and let small-scale abusive speech spread, which can escalate into widespread online abuse and social disorder, rendering regulation a failure.

What more can the Government and the platform do?

First of all, in light of the incident above, it is crucial to strike a balance between freedom of speech and the regulation of hate speech. To develop standards, governments and platforms should first collect and categorize large samples of cases, distinguishing well-intentioned criticism and public-opinion monitoring from hate speech, in order to identify abusive content more accurately and thus formulate clearer laws. For example, researchers have detected hate speech patterns and the most common unigrams, then used these along with sentiment and semantic features to classify tweets as hateful, offensive, or clean before analyzing them (Faiz et al., 2019).
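To make the unigram-based classification idea above concrete, here is a minimal sketch in Python. It is not the method of Faiz et al.: it uses only unigram counts with a Naive Bayes model (no sentiment or semantic features), and the tiny training set, its labels, and the class names are invented for illustration. A real moderation system would train on large, human-annotated corpora.

```python
# Minimal sketch: classifying short texts into moderation categories
# ("hateful" / "offensive" / "clean") from unigram counts, using a
# multinomial Naive Bayes model with Laplace smoothing.
# All training examples and labels below are invented toy data.
import math
from collections import Counter

def tokenize(text):
    """Lowercase and split into unigram tokens, stripping edge punctuation."""
    return [w.strip(".,!?\"'") for w in text.lower().split()]

class UnigramNaiveBayes:
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.class_counts = Counter(labels)          # docs per class (prior)
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for doc, label in zip(docs, labels):
            for tok in tokenize(doc):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        self.total = {c: sum(self.word_counts[c].values()) for c in self.classes}
        self.n_docs = len(docs)
        return self

    def predict(self, doc):
        """Return the class with the highest smoothed log-probability."""
        V = len(self.vocab)
        best, best_lp = None, float("-inf")
        for c in self.classes:
            lp = math.log(self.class_counts[c] / self.n_docs)  # log prior
            for tok in tokenize(doc):
                # Laplace-smoothed unigram likelihood
                lp += math.log((self.word_counts[c][tok] + 1) / (self.total[c] + V))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Invented toy training data for the three categories.
train = [
    ("those people are vermin and deserve to suffer", "hateful"),
    ("she is a vermin who should disappear", "hateful"),
    ("what an idiot, this take is garbage", "offensive"),
    ("you are an idiot and your post is garbage", "offensive"),
    ("thanks for sharing, this was a helpful read", "clean"),
    ("great article, I learned something helpful today", "clean"),
]
clf = UnigramNaiveBayes().fit([d for d, _ in train], [l for _, l in train])
print(clf.predict("they are vermin and should suffer"))   # -> hateful
print(clf.predict("this was a helpful article, thanks"))  # -> clean
```

In practice such a unigram classifier would only be a first-pass triage filter; borderline cases would still need sentiment features, context, and human review, as the paragraph above suggests.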

Secondly, to address the first stage of such incidents, platforms might consider guiding the gossiping audience of ordinary netizens toward goodwill. Some platforms have already made efforts at user education. On Bilibili, for example, users must pass a quiz before they are allowed to comment; the test urges viewers to remain respectful toward views they disagree with, keeping the platform harmonious. After passing the test, a user’s membership level rises with positive sharing, commenting, and liking behavior. A user who has been active on the site for a year, reached level 3, committed no violations in the past 90 days, and passed real-name authentication can apply to join the Discipline Committee. Members of the Discipline Committee can vote, while browsing, to help clean up bad speech and recommend quality content, and those who meet targets are rewarded accordingly (Bilibili, 2022). This policy increases users’ motivation to monitor one another and further purifies the community environment.

Figure 6. Screenshot of Bilibili test.

Finally, in China, many social accounts can only be registered with a cell phone number that has passed real-name verification. This makes it easier for police to locate the initiators of hate speech and punish them accordingly, so the real-name system for social accounts is a very effective measure for controlling hate speech. Governments elsewhere might consider similar policies suited to their own circumstances, such as introducing an Internet-based credit system on social platforms and linking governments, platforms, and law enforcement to reduce online abuse and hate speech more efficiently.


While everyone needs freedom of expression, it should not become a talisman for hate speech. Beyond individual self-awareness, platforms should take further concrete measures, refine regulatory standards, and simplify the complaint process to reduce online hate speech and the frequency of abuse-driven tragedies and social disruption. This means platforms must shoulder their social responsibility rather than relaxing control of hate speech to attract users and profit. Governments should introduce policies and laws as soon as possible to supervise private platforms and severely punish perpetrators of cyber violence.


Bilibili. (2022). The Code of Bilibili Disciplinary and Ethos Committee, version 20220224. Bilibili

Brooks, L. (2024, April 1). Scotland’s new hate crime law: what does it cover and why is it controversial? The Guardian.

Community Management Center. (2021). Infringement Complaints Area for Businesses. Weibo

Community Management Center. (2021). Weibo Community Convention. Weibo

Cook, J. (2024, April 1). Scotland’s new hate crime law comes into force. BBC News.

Cortese, A. (2006). Opposing Hate Speech. New York: Praegar Publishers.

ElSherief, M., Nilizadeh, S., Nguyen, D., Vigna, G., & Belding, E. (2018). Peer to Peer Hate: Hate Speech Instigators and Their Targets. ICWSM 2018.

Faiz, F. M., Zaheer, M. Y., Alam, M. L., Baig, M. S., & MD, A. R. (2019). Hate Speech on Social Media: A Pragmatic Approach to Collect Hateful and Offensive Expressions and Perform Hate Speech Detection. Journal of Resource Management and Technology, 10(2), 18–21. ISSN 0745-6999.

Flew, T. (2021). Hate Speech and Online Abuse. In Regulating Platforms (pp. 91–96). Cambridge: Polity.

GB News. (2024). There are concerns a law like this will turn the public against the police. Today’s protest is a sign of that happening already. Twitter.

Hate Crime and Public Order (Scotland) Act 2021, (2021).

Hate Crime and Public Order (Scotland) Bill, (2021).

House of Commons Home Affairs Committee. (2017). Hate Crime: Abuse, Hate and Extremism Online. London: House of Commons.


Bai, K. (2024). Supreme Court: severe punishment for malicious initiators, organizers, and repeat offenders of cyber violence; cybercrime must pay a price. Chinese Youth News.

Liaoshen Wan Bao. (2023). Latest on the “Wuhan schoolchild hit and killed” case: the mother falls to her death. Pengpai News.

Matchett, C. (2024, April 3). Humza Yousaf targeted with MORE hate crime complaints than JK Rowling. The Scottish Sun.

Meta. (2023). Policies and Enforcement. Facebook & Instagram.

TikTok. (2023). Safety and Civility. TikTok.

Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz and P. Molnar (eds), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37–56). Cambridge: Cambridge University Press.

Sky News Australia. (2024, April 5). Why JK Rowling is “brave” for calling out “woke” Scotland. Sky News Australia.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland.

Jin, Z., & Ran, L. (2023). Net Violence Governance in Law Approaches. BBT News.
