- Have you ever experienced it?
- The Internet makes it impossible to hide.
- Do you think it was personal? Or is it about the label you represent?
Image Source: Kids Helpline.
Summary. In the Internet age, anyone who shares content with the public on social media or other platforms is likely to encounter online harm. The Internet draws no distinction of race, gender, class, or region: “freedom of speech” enables everyone to speak freely, but it also leaves “hate speech” nowhere to hide. The Internet has made it easier for people worldwide to interact on platforms, yet it has also made it easier for them to be targeted by malicious words and actions from anywhere in the world. To date, well-known platforms and companies such as Facebook, Twitter, Spotify, and YouTube have formulated and published human rights policies or hateful conduct policies that urge and warn users to abide by them.
Nevertheless, hate speech and online harm also exist in Chinese Internet communities, which have few connections to these platforms, and they make many Chinese users uncomfortable. At the same time, because of cultural differences, China’s Internet governance plans also differ. This blog will therefore critically discuss the current situation and governance of hate speech and online harm in Chinese Internet communities such as Weibo, and compare them with governance plans in other cultures (such as Australia) and on other platforms (such as Facebook and Twitter).
Hate Speech & Online Harms: Is It What You Think It Is?
In 1997, the Council of Europe defined hate speech as follows (Council of Europe, 2002):
‘All forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance.’
In 2019, the UN Strategy and Plan of Action on Hate Speech defined it as (Rights for Peace, n.d.):
Communication that ‘attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender, or other identity factor’.
“Hate speech” should be reserved for speech that is harmful enough to warrant regulation, not for speech that is merely insulting or offensive (which falls under online abuse or harm). Mere offense or hurt feelings should not be the standard for regulating speech in civil or criminal law. To be regulable, hate speech must cause enough harm to its target to justify regulation, consistent with other harmful behavior the government regulates (Sinpeng et al., 2021). More specifically, hate speech that should be regulated meets the following conditions:
- It takes place in public, where it is foreseeable that others will inadvertently come into contact with it;
- In the political and social context in which it occurs, it targets members of groups that have been systematically marginalized. This means that members of groups that are not systematically marginalized should not be able to claim the protection of hate speech laws. Marginalized people are groups and communities that experience discrimination and exclusion because of unequal power relations at the economic, political, social, and cultural levels (National Collaborating Centre for Determinants of Health, n.d.), for example, ethnic and religious minorities, disabled persons, LGBTQ people, and women (Yang, 2022);
- Its power can derive from formal institutions. For example, leaders in the workplace, police officers, and representatives in parliament all hold formal, institutional power;
- Its power can also derive informally. For example, when others actively respond to hate speech or share it, both actions amplify its visibility;
- It is a subordinating act that naturally embeds structural inequities in the context in which it occurs. Hate speech classifies its targets as inferior, legitimizing discrimination against them.
(Sinpeng et al., 2021)
To be more specific, a man saying “I hate you” is not hate speech, but saying “it is not something that you women are supposed to do” is.
As for online harm, GOV.UK (2020) notes that the spread of terrorist and other illegal or harmful content on the Internet, the undermining of civil discourse, and the abuse or bullying of others are all online harms.
What do platforms and companies do?
Hateful content and online harm both threaten the security and freedom of the Internet ecosystem, and they undermine the significant benefits the digital revolution can bring. So far, some companies have adopted policies and measures to improve the safety of their platforms (GOV.UK, 2020).
Here are some popular companies’ policies and measures against hate speech and online harm:
– Meta’s policy on hate speech: we believe that when people are not attacked for who they are, they feel freer to use their rights and their voices (Meta, n.d.).
– Meta’s policy on online harm: we will ensure that communities are safe, engaged, supportive, informed, and inclusive by creating safe and welcoming communities, maintaining high-quality ecosystems, and actively working with industry (Meta, 2020).
– Twitter’s overview of hateful conduct:
“You may not directly attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease” (Help Center, 2023).
If users find content that violates this policy, they can report it to Twitter.
Twitter also states that violence, harassment, and other similar actions discourage people from speaking their minds and ultimately undermine the value of global public dialogue; its rules are designed to ensure that all people can freely and safely participate in that dialogue (Help Center, n.d.).
How about in China?
The Chinese Internet is structured differently from platforms like Facebook and Twitter. First, in the context of Chinese policy and culture, China does not have a significant immigrant population. Second, there is no religious reverence in the mainstream culture, and under a unified curriculum and educational structure, people of different ethnicities receive basically the same education. Because the Han population makes up the majority, identity factors such as religion, nationality, race, and color receive little attention in China.
In addition, since COVID-19 the global economy has slumped, and China faces the threat of an aging society, so the social atmosphere is now more conservative and traditional. The government is more conservative than it was during the economic upswing a decade ago, and its “three-child” policy, adopted in the face of an aging population, has put pressure on citizens, especially women. In recent years, therefore, a wave of feminism has swept the Chinese Internet, with more young women spreading feminist ideas on platforms (mainly Weibo). Caught between patriarchal and newer cultures, the Chinese Internet presents a scene of rivalry: conservative patriarchists and radical feminists constantly contend with each other, a status quo that can no longer be ignored.
Image source: CNN. A Chinese feminist in New York is suing Weibo for removing her account.
Hate speech and online harm on the Chinese Internet therefore occur mainly around gender topics, and they have become widespread.
Also, in 2021, the Chinese government launched a nationwide campaign to purify the online environment, but it concentrated mainly on fan groups and the fan culture around stars and influencers. Weibo, the central front of fan culture, was the campaign’s focus. When the campaign was launched, many people questioned whether Internet governance policy was “bullying the weak and fearing the strong”: gender-based violence, for example, which is harder to control, was not specifically addressed.
Image source: REUTERS/Thomas Peter. The Cyberspace Administration of China.
Since this campaign, major Chinese Internet platforms such as Weibo and Bilibili have added channels for reporting and handling cyber violence in response to the policy. However, legal restrictions on hate speech still fall short of people’s expectations. Meanwhile, the platforms’ “one-size-fits-all” prevention of online harm and hate speech (such as blocking keywords regardless of their semantics, so that “I hate you” becomes “I * you”) often makes users feel their expressive space contracting. Most of these one-size-fits-all results come from automated machine detection, but the platforms also employ staff to manually identify hate speech and online harm, process large amounts of digital material, apply decision-making processes, and intervene when necessary (Roberts, 2019). Whether by machine or by human, this approach to governance leaves users worried about their freedom of expression.
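The context-blind keyword blocking described above can be illustrated with a minimal sketch. This is not any platform’s actual implementation; the blocklist and function name are invented for illustration, and real systems use far larger, frequently updated multilingual lists. The point is that a literal keyword filter cannot distinguish hostile uses of a word from harmless ones:

```python
import re

# Illustrative blocklist -- real platforms maintain much larger,
# frequently updated lists in Chinese and other languages.
BLOCKED_KEYWORDS = ["hate"]

def mask_keywords(text: str) -> str:
    """Replace every blocked keyword with '*', with no semantic analysis."""
    for word in BLOCKED_KEYWORDS:
        # Case-insensitive literal match; context is ignored entirely.
        text = re.sub(re.escape(word), "*", text, flags=re.IGNORECASE)
    return text

# A hostile message and a harmless one are censored identically:
print(mask_keywords("I hate you"))          # -> I * you
print(mask_keywords("I hate missing you"))  # -> I * missing you
```

Because the filter matches strings rather than meanings, it produces exactly the “I * you” experience the article describes, which is why users perceive it as a contraction of their expressive space.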
Purcotton is a company that develops a line of 100% cotton household products, including sanitary napkins, cosmetic cotton, and mother & baby products; its primary customers are women.
In 2021, Purcotton published an advertisement on its official social media account in which a woman is followed home late at night by a man. She takes out cosmetic cotton and removes her makeup, finally scaring off the stalker; in the video, he can even be heard vomiting after seeing her without makeup (China Newsweek, 2021).
Video source: Purcotton Advertisement
Although this ad is not speech in the literal sense, it expresses intense misogyny. According to the conditions above, it is the kind of hate speech that should be regulated:
First, the ad targets women;
Second, the ad performs a subordinating act: it naturally expresses appearance-shaming and prejudice against women who wear makeup, encouraging the public to view them through stereotypes;
Third, behind its creation there was presumably a decision-maker who planned the idea from a male gaze, yet no one in the company considered it wrong, which reflects systematic discrimination and prejudice.
Purcotton issued two apology statements on Weibo after the incident, but their unconvincing content failed to persuade the public to accept the apology. What role, then, did the platforms play?
The launch and removal of the advertising video were entirely Purcotton’s own operations; neither the short-video platforms nor Weibo expressed any position or took any action on the advertising content, treating it simply as a PR flop. Weibo and other platforms act only after individual users have been attacked and have filed reports; before that point, marginalized groups who are discriminated against or harmed are ignored. There must be an identified perpetrator and victim for them to respond.
Under the influence of the nationwide campaign, online harm governance measures for individual users on the Chinese Internet are gradually being rolled out. Weibo, for example, regularly delivers its “community management conventions” and “anti-cyberbullying advocacy” to every user’s inbox and has optimized and simplified its reporting mechanism. It also provides dedicated links where users can file reports and browse past cases: Weibo’s “court” is open and transparent, and everyone can act as a juror. If A reports that B has used hate speech or online harm against them, Weibo asks both sides for evidence, and the case is handled manually within one to two weeks. If B did engage in hate speech, at minimum the offending content is deleted; at worst, Weibo cancels the account.
Moreover, Internet courts are being piloted in some major Chinese cities, making it easier to file lawsuits and documents, which also positively influences the legal governance of online harm and hate speech.
However, these Chinese platforms are often indifferent to online harm and hate speech directed at marginalized groups. Even if a group member complains about such content, the platform may dismiss the complaint because “this content does not offend you personally.” At the same time, reporting channels usually exist only inside the platform; there is almost no corresponding channel run by the government or the private sector. Australia, by contrast, has Bullying. No Way! and Kids Helpline, non-governmental anti-bullying helplines for children and teenagers, as well as the official eSafety Commissioner, to which citizens can report online harm. Combating hate speech online should be a partnership among the businesses, civil society, and government actors involved in Internet governance arrangements (Sinpeng et al., 2021).
Council of Europe. (2002). Recommendation No. R (97) 20 of the Committee of Ministers to Member States on “Hate Speech”. https://rm.coe.int/090000168090a6da
GOV.UK. (2020). Online Harms White Paper. https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper
Help Center. (2023). Hateful Conduct. https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy
Help Center. (n.d.). The Twitter Rules. https://help.twitter.com/en/rules-and-policies/twitter-rules
Ji, A. B. (2021, January 12). Guanggao Shexian Chouhua Wuru Nvxing, Quanmianshidai Daoqianxin Bian Xuanchuan Wenan [Ads insult and degrade women? Purcotton’s apology letter became propaganda copy]. China Newsweek.
Kids helpline. (n.d.). Online Harassment. https://kidshelpline.com.au/teens/issues/online-harassment
Meta. (n.d.). Hate Speech. https://transparency.fb.com/en-gb/policies/community-standards/hate-speech/
National Collaborating Centre for Determinants of Health. (n.d.). Marginalized populations. https://nccdh.ca/glossary/entry/marginalized-populations
Rights for Peace. (n.d.). What is Hate Speech. https://www.rightsforpeace.org/hate-speech
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
Yang, L. (2022, March 3). Marginalization: What It Means and Why It Matters. Fairygodboss. https://fairygodboss.com/career-topics/marginalization?scroll=566
YouTube Creators. (2019, May 24). Hate Speech Policy: YouTube Community Guidelines [Video]. YouTube. https://www.youtube.com/watch?v=45suVEYFCyc&t=2s
YouTube Creators. (2019, May 2). Harassment & Cyberbullying Policy: YouTube Community Guidelines [Video]. YouTube. https://www.youtube.com/watch?v=mqG5G26Q0yU