When it comes to the Internet, it is impossible to deny that beneath the advantages of its highly interactive and global nature lie serious problems. Conflict between different groups on social media is a major source of incitement to violence and disruption of harmony, involving the spread of ideologies, conflicts of interest, and opposition to individuals or groups. The complex digital ecosystem created by intertwined platforms requires moderation, governance, and regulation by the platforms themselves to maintain a civil environment, avoid harm to others, and create a positive user experience (Flew, 2021). This blog post reviews and analyzes a powerful, years-long hate campaign on Chinese social media, the “Cai Xukun incident”, through the lens of misogyny. Finally, it considers the challenges of managing hate speech on platforms and possible solutions.
What is hate speech?
Have you ever been attacked by an unknown user after speaking on a public platform? Whenever you post content publicly, it is likely to be recommended by the algorithm to “people who might be interested”. When groups of people on the internet are aggregated by algorithms, individuals are more likely to view and engage with content that fits their preferences and stays within their comfort zone. As Sunstein (2008) suggests with the concept of ‘information cocoons’, people become confined to a limited perspective, which can make groups paranoid, while the protection of the ‘information cocoon’ lets them look out for each other and “feel good”.
Hate speech differs from mere online trolling and personal attacks: it is a form of identity-based violence. It incites hatred and violence against a person or group on the basis of inherent identity characteristics – such as religion, race, nationality, gender, sexual orientation, or physical status (Flew, 2021). Increasingly, people are using social media to incite violence against minority groups, individuals, and activists. Once group identities are stigmatized, the rise of hate speech follows naturally: because the targets are framed as inferior, sick, evil, and abnormal, they are said to deserve to ‘die’, to ‘fxxk off’, to be oppressed, ostracized, discriminated against, attacked, and even exterminated. Beyond the psychological damage caused by verbal violence, those who create and spread hate speech online also incite others to act, as such statements build a climate of opinion in which attacks and violence against these groups appear justified. Individuals involved in these incidents of hate speech generally see their actions as acceptable or even noble, while actually engaging in what is widely considered immoral behavior (Massanari, 2016).
In The Harm in Hate Speech, Jeremy Waldron (2012), a professor at New York University, argues that social inclusion – accepting and understanding all groups – is a vital social value. A diverse society can provide a sense of security for every member of the community, protecting individuals from hostility, violence, and discrimination. It is precisely this sense of social security that hate speech undermines.
Wait, have you ever seen online misogyny?
One underlying reason for the continued rise in hate speech against women online is the unequal digital structure: women have less access to the internet than men. In South Asia, for example, women have nearly 35% less Internet access than men, a digital gender divide that hinders social development and reduces the potential for equal gender participation (UN Women, 2021). According to Frenda et al. (2019), typical features of the internet, such as anonymity and interactivity, weaken the authority and constraints that would otherwise restrain sexist attacks. This is particularly notable in environments dominated by male discourse, such as video gaming, where social dominance and male norms prevail and women are expected to conform to them. Female players are often presumed by default to be incompetent and are sexualized, and when they make mistakes they are attacked, abused, and ostracised by male players on the grounds of their gender.
Virtually any woman can be the target of sexist hate speech, no matter her background or status. Data show that female politicians are more likely than their male counterparts to be threatened online: female politicians in India receive an average of 113 hostile or abusive tweets per day, which equates to one in seven tweets about female politicians, one in five of which are sexist or misogynistic (UN Women, 2021).
The online abuse of women is a typical practice of online misogyny. It reinforces the gendered power structures of male oppression, attempting to destroy women’s sense of security and self-confidence by denigrating and suppressing them, in an effort to create a hegemonic discourse of internet masculinity. Yet misogyny is much more than a set of practices. Do you know the difference between misogyny and sexism?
Misogyny may be different from what you believe…
Kate Manne (2019) argues that the traditional definition of misogyny as ‘hatred of women’ is too simplistic, redefining it instead as the attempt to punish and control women who challenge male dominance. On this view, misogynists do not hate all women: they show appreciation and praise for women who uphold the status quo, and punish women who reject female subordination (Illing, 2017). While both sexism and misogyny serve to rationalize and justify patriarchy, the difference is that misogyny is the ‘law enforcement’ branch of patriarchy. In brief, just as the law distinguishes between ‘good citizens’ and ‘bad citizens’, misogyny has its own rules for distinguishing ‘good men’ from ‘bad men’ and ‘good women’ from ‘bad women’. The following paragraphs highlight the often overlooked ‘bad man’ branch.
Why are men attacked by misogyny too?
Sedgwick’s (2015) theoretical triad of ‘male homosocial desire’, ‘homophobia’, and ‘misogyny’ illustrates that the ‘masculinity’ unified by male homosocial desire is defined through the ‘othering’ of women. From crying and sensitivity to loving rag dolls, growing long hair, and wearing dresses – in personality, dress, and behavior alike – it is deemed unacceptable for men to be feminine, because femininity is taken to imply weakness, a view that demeans and stigmatizes the feminine.
Terry Kupers (2005) defines ‘toxic masculinity’ as a socially regressive set of masculine traits characterized by the devaluation of women, a sense of dominance, the suppression of feelings, and excessive autonomy. According to research (American Psychological Association, 2018), men may become aggressive because their self-perceptions fail to meet masculine expectations or because they are emotionally repressed.
For men, gender temperament significantly influences the likelihood of facing verbal violence, ostracism, and isolation. The more feminine a man is, the more readily misogynists label him a ‘bad man’, and the more likely he is to be subjected to bullying and hate speech. The violence promoted by toxic masculinity carries a far lower cost when practiced as online hate speech. Thus misogynistic hate speech threatens not only women but also men who lack masculinity and are classified as subordinate.
“The Cai Xukun Phenomenon”
The Chinese sports forum Hupu and the anime video site Bilibili typify the predominantly male user base of Chinese social media, with more than ninety per cent of Hupu’s users being male (Fang et al., 2022). Since 2019, a sustained campaign of online violence has targeted Cai Xukun, an idol who debuted on the Chinese talent show Idol Producer and has a large female fan base. That Cai, a male perceived as “lacking in masculinity”, is sought after by girls poses a challenge to patriarchy. When social changes are perceived as detrimental to male dominance, men may react with extreme manifestations of masculinity, such as violence against subordinate groups (Scaptura & Boyle, 2019).
The incident began when Cai introduced himself on Idol Producer, saying he liked singing, dancing, rapping, and basketball, then performed a basketball-and-dance routine to the track ‘Just Because You’re So Beautiful’. His androgynous appearance and perceived ‘showboating’ moves provoked a collective attack on him by Chinese men. They argued that Cai’s appearance lacked masculinity and that “this damn thing ruins the image of Chinese men” (Wang et al., 2022), and they deliberately misheard the lyric “just because you’re so beautiful” as “chicken, you’re so beautiful”, turning the phrase into a derogatory proxy for Cai (Wang et al., 2022). In Chinese, the word for chicken is homophonous with a word for prostitute and carries the connotation of sodomy.
What made things worse was the NBA’s invitation to Cai to serve as an ambassador. Despite his love for basketball and his skills, Cai became synonymous with ‘sissy’ among straight men, deemed unworthy of being a “male representative of basketball”. A Hupu survey asked users what they thought of Cai as an NBA ambassador: of the 44,983 users who participated, 40,432 (89.9%) rated him negatively, with comments including “trash” and “I choose death” (Wang et al., 2022).
Firstly, the attackers established a set of insulting strategies with multiple branches. The first is sarcasm: using derogatory terms to dehumanize and insult Cai Xukun, making him a target of ridicule. The second is feminizing degradation: claiming he has undergone penis removal and breast augmentation surgery. The third involves explicit death threats: vowing to kill him and his family and dismember him. Obscene and offensive language fills every Weibo message board related to Cai, and those posting hate speech have turned it into a carnival. Although Weibo has a relatively sophisticated mechanism for blocking extremist speech, users are adept at using homophones to evade censorship.
In 2020, a proposal to the Ministry of Education of the People’s Republic of China on ‘preventing the feminisation of male youth’ became a trending topic on Chinese social media. The proposal emphasised cultivating masculinity in students, making feminine males a target of justified repression. Weibo, in the context of the party-state, is a tool for the government to make its voice heard, and the ideology and gender binary promoted by the government contributed to Cai Xukun’s case.
Secondly, they use humor and meme strategies to produce videos satirizing Cai, mixed with bloody violence and malicious attacks. Bilibili’s tacit tolerance of ‘humorous’ content, together with its sharing, liking, and disliking functions, has helped these memes and hate speech spread. The platform’s algorithmic management also facilitates the circulation of controversial humor (Matamoros-Fernández, 2017), for instance by automatically recommending controversial content under videos related to Cai.
Finally, any woman who steps in to defend Cai is subjected to the same misogyny and hate speech, and such women must remain silent to protect themselves from threats.
The reason Cai Xukun’s case can be called a “phenomenon” is that it is not uncommon in China. Almost all men who style and dress themselves up, celebrities or not, suffer similar hate speech.
Lastly, who is responsible for hate speech?
Legal restrictions, platform governance, and self-governance are the most powerful weapons against hate speech. But again, this requires a balancing act with freedom of expression. Managing hate speech and online harm on internet platforms may require implementing policies and enforcing regulations that restrict free expression and risk abuse by censorship bodies (Flew, 2021).
Misogyny is pervasive and often invisible, which makes it confusing at times. Because misogyny is so difficult to identify accurately, platforms cannot censor and ban 100% of the vast amount of content they host. Current AI moderation is limited in its ability to identify hate speech, so manual screening remains essential.
In China’s social media environment, government legislation and official opinion guidance are paramount. Those who publish hate speech need to be aware of the legal costs they will pay. Official bodies should raise the standard of speech regulation on social media so that no group is left vulnerable to discrimination.
There are indeed limits to what we can do, but as stated above, hate speech is rooted in a range of prejudices; if a freedom is grounded in prejudice and can cause harm to people, then that freedom should have boundaries. Self-censorship before publishing is necessary, and only rational, high-quality public dialogue can improve the negative environment in which prejudice and hatred thrive.
American Psychological Association. (2018, September). Harmful masculinity and violence. American Psychological Association. https://www.apa.org/pi/about/newsletter/2018/09/harmful-masculinity
Fang, J., Li, R., & Yang, C. (2022, January 17). Straight Guy Index: Conversation Strategies of Social Platform in Formulation on the Collective Identity of the Oppressed Minority. Atlantis Press. https://doi.org/10.2991/assehr.k.220105.243
Frenda, S., Ghanem, B., Montes-y-Gómez, M., & Rosso, P. (2019). Online Hate Speech against Women: Automatic Identification of Misogyny and Sexism on Twitter. Journal of Intelligent & Fuzzy Systems, 36(5), 4743–4752. https://doi.org/10.3233/jifs-179023
Illing, S. (2017, December 5). What we get wrong about misogyny. Vox. https://www.vox.com/identities/2017/12/5/16705284/elizabeth-warren-loss-2020-sexism-misogyny-kate-manne
Kupers, T. A. (2005). Toxic masculinity as a barrier to mental health treatment in prison. Journal of Clinical Psychology, 61(6), 713–724. https://doi.org/10.1002/jclp.20105
Manne, K. (2019). Down Girl: The Logic of Misogyny. Oxford University Press.
Massanari, A. (2016). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118x.2017.1293130
Scaptura, M. N., & Boyle, K. M. (2019). Masculinity Threat, “Incel” Traits, and Violent Fantasies Among Heterosexual Men in the United States. Feminist Criminology, 15(3), 278–298.
Sedgwick, E. K. (2015). Between Men: English Literature and Male Homosocial Desire. Columbia University Press.
Sunstein, C. R. (2008). Infotopia: How Many Minds Produce Knowledge. Oxford University Press.
UN Women. (2021). Eliminating online hate speech to secure women’s political participation. UN Women – Asia and the Pacific. https://asiapacific.unwomen.org/en/digital-library/publications/2021/04/eliminating-online-hate-speech-to-secure-women-s-political-participation
Waldron, J. (2012). The Harm in Hate Speech. Harvard University Press.
Wang, Y., Mao, L. L., & Smith, A. B. (2022). Appraisal and coping with sport identity and associated threats: exploring Chinese fans reactions to “little fresh meat” in NBA advertisements. European Sport Management Quarterly, 1–22. https://doi.org/10.1080/16184742.2022.2065512