Clash at the Keyboard: Lifting the Veil on Online Hate Speech

Imagine you and your friends enjoying a game on the basketball court on a sunny weekend afternoon. The sound of laughter and sneakers scuffing the ground fills the air. Suddenly, a sharp “Help!” pierces the air from a short distance away. You and your friends stop and look in the direction of the sound. In that moment, instinct and kindness prompt you to run towards the voice without hesitation.
This is exactly what 16-year-old Xiao Jie (a pseudonym) experienced. At a river in Chengdu, Sichuan Province, China, he heard a call for help, immediately dropped his basketball and jumped into the river. He could see something floating in the water and hoped to save the “person” in danger. Only after jumping in did he realise it was a pet dog, and Xiao Jie ultimately drowned from physical exhaustion. According to some media reports and self-described insiders commenting online, the two girls on the shore had misled him, lying that it was their sister who had fallen into the water, which is why the boy jumped in.
The incident sparked widespread discussion on the Internet, but unfortunately many of the comments did not focus on sympathising with Xiao Jie or reflecting on the incident itself. Instead, they revealed a deep-seated gender antagonism: some comments blamed and attacked the two girls with prejudiced, sarcastic rhetoric such as “female treasure charm moment” and “stay away from fairies,” escalating the incident from a conflict between individuals into a confrontation between the two genders.
More than just a story of tragedy and misunderstanding, this incident touches on a broader topic: hate speech on online platforms and the socio-cultural factors behind it. We need to examine how these platforms have become hotbeds of hate speech and how improved digital policies and governance can minimise its harm. In the digital age, understanding and addressing these issues is crucial, because they bear directly on the daily life and psychological safety of every online user.

What are online harm and hate speech, and what are their implications?

Imagine that you are following this news story, touched by Xiao Jie’s act of kindness and saddened by its tragic end. You await the results of the relevant investigations and wonder how you would react in a similar situation. As you skim through the comments, however, you realise that some people have simply blamed one of the parties involved and, before the facts were even ascertained, expanded this into accusations against an entire gender group. Such uncomfortable comments, directed at a particular person or group and offensive in content, are what we call online harm and hate speech, a phenomenon that is becoming increasingly common in today’s digital world. Online harm covers comments or behaviours transmitted through digital platforms that can cause a person psychological or emotional damage. Hate speech, on the other hand, is speech that spreads hatred, exclusion, or discrimination based on race, gender, religion, or other identity characteristics (Flew, 2021).
When Xiao Jie’s tragedy gained widespread attention on major Chinese social media, especially Weibo, it was met not only with praise for his heroic actions, but also with a series of vitriolic comments steeped in gender antagonism, which is particularly pronounced in China. Such comments not only reinforce gender stereotypes but openly express sexism, and even escalate into gender-targeted cyber-attacks, all of which reflect how real-world inequalities extend into and are exacerbated by the digital world.
The rapid spread of these statements reveals an interesting and complex phenomenon: the speed and scope of information dissemination has undoubtedly expanded in the digital age, but is this always a good thing? Platform algorithms are designed to prioritise the promotion of content that provokes strong reactions, regardless of the social value of that content.
This phenomenon raises questions not only about freedom of expression but also about the responsibility of platforms. Are these platforms really just harmless intermediaries, or should they take more responsibility for the content they host? Social platforms are more than conduits for delivering information; they actively shape the way we conduct public dialogue. When algorithms tend to promote content that provokes strong reactions, such as sexist or misleading information, the impact goes far beyond simple “freedom of speech”: such content stirs up strong public emotions and even begins to influence society’s views on gender roles. We must therefore seriously consider whether this uncontrolled flow of information should be subject to stricter control. Freedom of expression is vital, the cornerstone of free thought and essential to human development, political life and intellectual advancement; hate speech, by contrast, is repugnant because it fuels mistrust and hostility in society and violates the human dignity of the group being attacked (Flew, 2021).
In this regard, the role of social platforms and the effectiveness of their content regulation policies becomes a topic we cannot avoid. Discussing these issues is not only about understanding the incident itself, but also touches on how we can build a more just and responsible digital environment. These reflections may take us further to explore: how we can balance freedom and responsibility in this Internet age, and how we can utilise technology for the betterment of society as a whole.

Exploring the deeper reasons

When analysing gender antagonism on Chinese social media, we have to face several underlying socio-cultural factors.

Firstly, gender role stereotypes remain deeply rooted in certain regions and groups, such as the notion that “men are superior to women”, which often places women in a weaker position in society and in the family. Such culturally rooted prejudices continue to fester not only in the private sphere but also in public discourse.

Secondly, gender antagonism is further exacerbated by structural inequalities in the economy. In the workplace, women often face the double disadvantage of fewer employment opportunities and lower pay, an inequality that not only limits women’s economic freedom but also deepens the economic and power gap between the sexes.

In addition, the role of the cultural media cannot be ignored. The media and cultural industries reinforce gender stereotypes by shaping and disseminating images of the “ideal” man and woman: men are portrayed as strong and independent, while women are often labelled “weak” and “dependent”. This cultural feedback loop reinforces the public’s expectations of and prejudices about gender roles.

Finally, the pressures of China’s rapid social development have brought changes in family structures, greater individual independence and heavier everyday stress, all of which can erode trust and security between the genders and easily breed conflict and resentment.
Together, these factors form a complex socio-cultural context that influences public perceptions and discussions on gender issues. Only by understanding and critiquing these deep-rooted socio-cultural factors in depth can we find effective ways to reduce gender antagonism and promote social harmony.
Having understood these deep-rooted socio-cultural factors, we also need to consider the impact of the technological and regulatory dimensions on such gender-confrontational discourse. How the design and policies of social platforms can amplify or mitigate these socio-cultural contradictions is the key question we need to explore next.
In the world of social media, the speed and scope of information dissemination is unprecedented, but this is not always a good thing. Especially on platforms like Weibo, where a single piece of hate speech can be instantly thrust in front of an audience of millions. But how do these platforms actually handle this flow of content? It’s not as simple as many might think.
Firstly, consider algorithmic recommendation systems. These algorithms are typically optimised to increase user engagement, but that same optimisation can fuel the proliferation of negative or harmful content. Furthermore, the anonymity of the Weibo platform, where users can create multiple accounts, protects the freedom to discuss sensitive or personal topics, but it also emboldens some users, for whom the “consequences” of abuse are almost nil (Massanari, 2017). This design is crude: it neither accounts for the diversity of online environments nor effectively curbs online violence, and so it quietly becomes an umbrella under which users evade responsibility for misbehaviour.
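The engagement-first ranking logic described above can be illustrated with a deliberately simplified sketch. All post data, signal names and scoring weights here are hypothetical; real recommendation systems use far richer signals and trained models, but the structural problem is the same: hostile reactions count as engagement too.

```python
# A deliberately simplified sketch of engagement-optimised ranking.
# Posts and weights are hypothetical, not any real platform's system.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    shares: int
    angry_reactions: int  # strong hostile responses also count as "engagement"


def engagement_score(post: Post) -> float:
    # Every interaction raises the score, regardless of whether the
    # reaction is appreciative or hostile -- this is the core problem.
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.angry_reactions


posts = [
    Post("Heartfelt tribute to the boy's bravery", likes=120, shares=10, angry_reactions=2),
    Post("Inflammatory comment blaming an entire gender", likes=40, shares=60, angry_reactions=300),
]

# Sort by engagement, highest first: the inflammatory post wins
# because outrage generates more total interaction.
ranked = sorted(posts, key=engagement_score, reverse=True)
print(ranked[0].text)  # the inflammatory post ranks first
```

Nothing in the scoring function distinguishes the social value of the two posts; the divisive one simply generates more interactions, so it is promoted.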
Now consider the platform’s regulatory policy. Without a large wave of user reports, much problematic content would never touch the platform’s nerve at all. This way of handling the situation is plainly passive, even lazy. Why not take proactive measures instead of waiting until the problem becomes unmanageable before intervening? Such a strategy is not only inefficient; it lets the problem run unchecked in its early stages and leaves the victims to suffer alone.

Comprehensive Solution Strategies

In the face of gender-antagonistic discourse on social media, a comprehensive approach is crucial. Firstly, at the policy level, platforms need to establish stricter content management policies and codes of conduct to ensure that sexist and hateful speech is effectively curbed. This includes stronger monitoring of and penalties for non-compliant content, and strict enforcement of the rules. Major social platforms should also work closely with groups that are frequently targeted by cyber-attacks, so that they can more accurately identify the forms of hate speech used against these groups and incorporate those specific expressions into the platforms’ regulatory policies (“Facebook: Regulating Hate Speech in the Asia Pacific,” 2021).
In this context, it is critical to understand how well hate speech is regulated at the national level, how platforms such as Weibo delimit the scope of hate speech in specific national contexts, how their users experience and understand hate speech, and how the platforms respond to users’ concerns. A common challenge for these platforms is to develop more effective policies and procedures for managing harmful content, while being more transparent about how moderation decisions are made in the interest of freedom of expression (“Facebook: Regulating Hate Speech in the Asia Pacific,” 2021).
On a technical level, platforms should use advanced algorithmic techniques to identify and filter out speech that contains sexist elements. This means going beyond simple keyword filtering: machine learning techniques should be used to understand the context of content, so that harmful messages can be identified and stopped more accurately.
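To illustrate why contextual understanding matters, here is a minimal, hypothetical sketch contrasting a naive keyword filter with a stubbed context-aware check. The keyword list, the hostile markers and both functions are illustrative placeholders only; a production system would call a trained text classifier rather than the hand-written stub shown here.

```python
# Hypothetical moderation sketch: keyword matching vs. a context-aware stub.
# Neither function represents any platform's real system.

BLOCKED_KEYWORDS = {"fairies"}  # slang used as a gendered slur in some contexts


def keyword_filter(comment: str) -> bool:
    """Flag a comment if it contains any blocked keyword, ignoring context."""
    words = comment.lower().split()
    return any(kw in words for kw in BLOCKED_KEYWORDS)


def context_aware_filter(comment: str) -> bool:
    """Stub for a model that weighs context before flagging.

    A real system would run a trained classifier here; this stub only
    flags a keyword match when hostile framing is also present.
    """
    hostile_markers = ("stay away from", "blame", "typical")
    text = comment.lower()
    return keyword_filter(comment) and any(m in text for m in hostile_markers)


# The keyword filter flags an innocent sentence about folklore...
print(keyword_filter("My daughter loves stories about fairies"))         # True (false positive)
# ...while the context-aware version only flags the hostile usage.
print(context_aware_filter("My daughter loves stories about fairies"))   # False
print(context_aware_filter("Stay away from fairies like those girls"))   # True
```

The point of the sketch is the false positive: bare keyword matching cannot tell folklore from a slur, which is why the text argues for models that read context.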
Socio-educational and awareness-raising measures are key to addressing gender antagonism. Public awareness of gender issues can be greatly enhanced by comprehensively promoting education on gender equality. Specifically, support for women’s well-being in employment and daily life should be strengthened, and the strengths of both sexes in their respective fields should be given full play. It is also crucial to legislate for the protection of the rights and interests of both men and women in marriage, to ensure that the law plays its due role in safeguarding the equality of both parties.
The implementation of these measures is expected to significantly reduce gender-antagonistic discourse and help build a healthier and more respectful online communication environment. Only through such comprehensive measures can we gradually change deeply rooted prejudices and push Chinese society towards true gender equality.


In responding to hate speech and online victimisation on social media, we need more than piecemeal interventions; we should adopt a systematic strategy. As recent proposals point out, regulating social platforms, or the internet as a whole, is not an unattainable dream. The key lies in adopting a risk management approach, moving away from a traditional focus on individual pieces of content towards a more holistic monitoring strategy. By imposing a statutory duty of care, we can draw on mechanisms known to be effective while maintaining the flexibility needed to avoid stifling innovation and ensuring that the basic needs of users are met (Woods & Perrin, 2021).
This systematic approach provides a more rational framework for ensuring that social media is not only a space for free expression, but also a safe and respectful environment. When implemented correctly, this strategy not only reduces the circulation of harmful content, but also enhances the health of the online environment as a whole. And in the process, innovation and user protection should not be seen as opposites, but rather dual goals that can go hand in hand.
In addressing the challenges of the digital age, we must go beyond technological solutions and delve deeper into culture and values. The key lies in developing policies that are transparent and respectful of privacy, democratising governance structures, and educating the public to engage critically with online information. Through these combined measures, we are not just addressing the challenges, but shaping a more just and secure digital future.

Facebook: Regulating Hate Speech in the Asia Pacific. (2021). Department of Media and Communications, The University of Sydney.
Flew, T. (2021). Regulating Platforms. John Wiley & Sons.
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.
Woods, L., & Perrin, W.H. (2021). Obliging Platforms to Accept a Duty of Care. Regulating Big Tech.
