Yehan Sheng — 510004860
The increasing number of attacks on various groups and events has raised concerns about the violence and negative effects of hate speech and online harms posted on the internet. Advances in digital technology have increased the immediacy with which people can communicate and share information, and have fuelled the growth of social media platforms and online communication. At the same time, these platforms give many users the opportunity to post hate speech anonymously or under fake nicknames, producing distinctive harms and negative consequences (Woods & Ruscher, 2021, p. 265). Misogyny, as one of the recurring targets of hate speech, has harmed women across many different sectors. Hate messages and incitement to violence are spread and amplified on social media, and there is close scrutiny of what platform companies and the state are doing to regulate hate speech and protect the users exposed to it. This blog focuses on the misogynistic dimension of online hate speech and on how states and platforms are addressing this issue.
Online hate speech, published and disseminated in large volumes through digital platforms and social media, is an issue that deserves everyone's close attention. Appearance, gender, race, ethnicity, sexual orientation, and political and religious beliefs are all grounds on which contemporary users are subjected to hate and harm on the internet. Hate speech expresses, encourages, and incites hatred against individuals or groups with these specific characteristics or a range of characteristics, and may be published in the form of jokes or satirical comments (Flew, 2021, p. 115). Such intimidation, discrimination, and prejudice leave target groups unable to lead normal, harassment-free lives. Online hate speech inflicts a variety of harms on its targets, from short-term effects such as raised blood pressure and physical threats to longer-term harms including depression (Woods & Ruscher, 2021, p. 265). The most severe and lasting trauma can even drive victims of online violence to suicide.
According to the Pew Research Center, 41% of US internet users surveyed had experienced online harassment in 2017, with 18% experiencing serious harassment (Flew, 2021, p. 115). Surveys of Americans conducted by the Anti-Defamation League (ADL) over three consecutive years found that the number of users experiencing online hate speech and harassment has not decreased. In 2020, 44% of respondents to the "Online Hate and Harassment" report said they had experienced online harassment; in 2021, 41% of respondents again reported experiencing online harassment, with 27% reporting severe online harassment, including sexual harassment, stalking, and physical threats (ADL, 2022).
Social media as a place to commit violence
Many social media platforms, including Facebook, Twitter, YouTube, Instagram and Snapchat, host large numbers of active users who interact by posting and sharing content. The immediacy of posting and the speed of dissemination allow users to communicate smoothly on social media. But social media also gives violent people the opportunity to promote their views and incite others to join the ranks of those who commit violence (Laub, 2019).
In traditional forms of media, content is edited and moderated by someone other than the author before publication, which effectively limits hate speech. On social media platforms, by contrast, the sheer speed and volume of published content leave much of it, and the instant comments beneath it, without editorial oversight, and regulatory mechanisms lag behind (O'Regan and Theil, 2020).
As hate speech has grown and mutated, anonymity has increasingly been used as a low-cost way of recruiting more 'like-minded' people to produce targeted hate speech, creating a constant and distinctive negative impact (Woods & Ruscher, 2021, p. 266).
The COVID-19 pandemic has also led users to spend more time online and has highlighted the problems erupting on social media. The rapid spread of hate speech, harassment and violence through social media has worsened conditions for target groups, who already face severe discrimination, prejudice and conspiracy theories, and who have been subjected to increased harassment and pressure.
Hate speech and freedom of speech
Freedom of speech is protected by most national constitutions and by the major international human rights treaties. It is highly valued because it is recognized as an indispensable expression of freedom of thought and of human progress. Nevertheless, unchecked hate speech breeds mistrust, hostility and strife in society, and negates the ideas and dignity of the target group (Flew, 2021, pp. 116-117). Hate speech that amounts to incitement, that is, speech whose consequence is to incite violence against the target group, is prohibited in most countries. Some countries, however, do not prohibit merely derogatory forms of hate speech because of their strong commitment to freedom of expression and to democratic discourse on politics and policy (O'Regan and Theil, 2020).
In many countries, regulating hate speech through the legal process sits in tension with constitutional protections of freedom of expression. Australia, however, is an exception in that its constitution protects only the freedom of political communication, and in a limited way (Sinpeng et al., 2021, p. 13).
“Policymakers must ensure that the regulatory regime for social media platforms does not unduly compromise users’ freedom of expression.”
–O’Regan and Theil, 2020
While social media platforms regulate speech through complaint mechanisms and obligations to remove illegal speech, there is also a risk that legitimate speech may be erroneously removed or that users may be prevented from expressing themselves freely online in certain situations. How to properly regulate hate speech while preserving users' freedom of speech is a question that each country needs to examine and discuss further in light of its own context and values.
How much do you know about misogyny?
The emergence of online feminist activism has raised global concerns about the harassment of women online and offline in everyday life. Research has found that a growing number of women are being excluded from public participation and other spaces on online platforms, including social media, and that many of them suffer sexual and gender-based discrimination (Barker & Jurasz, 2019, p. 95). Socio-legal structures and policies that fail to address these issues cannot confront this situation or turn platforms into spaces of equality and anti-discrimination, which perpetuates harassment and discrimination against target groups on the internet. To this day, when women speak out about gender equality on online platforms, they are met with discriminatory and insulting comments from users who oppose them, and sometimes with more serious acts of violence. This online discrimination and mistreatment compounds the gender discrimination women already face daily and restricts their right to express their views freely.
Andrew Tate, a former professional kickboxer, was banned from Facebook, Instagram, YouTube and TikTok in 2022 for posting misogynistic comments. In a video originally uploaded to YouTube, he claimed that women should "stay at home" and act as dependents of their male partners. He was banned from Twitter in 2017 for saying that "victims of sexual assault must take some responsibility", and has voiced many other misogynistic comments and viewpoints (Right Response team, 2022). He retains a large number of subscribers across social media platforms, however, and despite his official accounts being blocked he continues to generate extensive discussion and news coverage among social media users; the #AndrewTate hashtag on TikTok has accumulated tens of billions of views (Jennings, 2022).
The glass ceiling, one of the most visible injustices against women, refers to the social barriers in the workplace that prevent women from advancing to top management. In some societies and workplace cultures, stereotypes and discriminatory policies that assume top management should be male create organizational and policy barriers that further erode women's self-confidence and make promotion to the top more difficult (Taparia & Lenka, 2022, p. 372). Research has found that in 2017 women held only 19.9% of board seats in US Fortune 500 companies, and only 5.8% of CEO positions. In 2000, only 5% of board members of Norwegian listed limited liability companies were women, and their annual income was more than 30% lower than that of their male counterparts on the same boards (Bertrand et al., 2019, p. 192). These stereotypes operate explicitly through education level, age, social class, and marital and reproductive status, defining and limiting the roles women may occupy in families, organisations and society.
Chinese pink-haired girl driven to suicide by online violence
Earlier this year, Linghua Zheng, a 23-year-old pink-haired Chinese woman, died by suicide after online hate speech drove her into depression. In July last year, she had shared the happy news of her postgraduate offer with her grandfather at his hospital bedside and posted a photo of the moment on social media, never expecting that the post would trigger a wave of online violence. Netizens and inaccurate news outlets began to spread disinformation: "escort girl", "fake education", "red-haired monster", "the old man took a postgraduate course and married a young girl"; these false reports were read by millions (Lu, 2023). After six months of online violence, Linghua suffered from depression and sleeping and eating disorders, and was at one point hospitalized. Even then she struggled to maintain her mental health, sharing her life and studies, posting her progress in defending her rights, and keeping an active diary of her fight against depression (Teller Report, 2023). The fight against online violence, however, is not an easy one to win.
Online hate speech is becoming more prevalent in an increasingly digital society, with ever more serious consequences. Strengthening laws and regulations, holding online platforms accountable, and monitoring hate speech on websites are all important measures needed to control this problem and to promote a culture of tolerance and respect.
National considerations in hate speech governance
Hate speech legislation in the Asia-Pacific region combines constitutional, criminal and civil law. For a variety of reasons, most approaches to regulating hate speech work indirectly, through cyber-crime laws, telecommunications and electronic information regulations, and country-specific laws (Sinpeng et al., 2021, p. 13). In the Philippines, for example, the Safe Spaces Act contains provisions against intimidating hate speech based on gender and sexual orientation. In addition to each country's own constitutional provisions and regulatory policies, five countries, including Australia and the Philippines, have signed international human rights instruments that help further prevent the spread of hate speech across the region (Sinpeng et al., 2021, pp. 13-14). Each country's emphasis differs, however: the US constitution explicitly prohibits the government and public authorities from restricting freedom of expression, with exceptions only for some seriously violent hate speech, while the UK has enacted a series of explicit bans on online and print hate speech related to race, religion and sexual orientation (O'Regan and Theil, 2020).
Governance of social media platforms
Social media platforms, as the largest providers of information creation and dissemination, also bear responsibility for shaping a more peaceful and friendly online environment. Facebook has built its content regulation policy by drawing on concepts and tools from the US legal system and constitution, focusing on three areas: public and content policy, engineering and product, and global operations (see figure 5) (Sinpeng et al., 2021, p. 18). The UK's White Paper advises social media platforms to establish regulators to ensure that major companies comply with their obligations to regulate online media, and the European Union has signed a Code of Conduct on countering illegal hate speech online with Facebook, Twitter, YouTube, Instagram, Microsoft, Snapchat, Google+ and Dailymotion to further define platforms' responsibility to regulate hate speech (O'Regan and Theil, 2020).
Major social media companies are also taking measures against hate speech, largely through self-policing. Facebook's community standards, for instance, explicitly state that hate speech will be removed and that people who incite attacks and discrimination against protected groups will not be allowed on Facebook. YouTube's terms of service likewise prohibit hate-speech-related content, and Twitter bans "specific and direct threats of violence against others" (Alkiviadou, 2019, p. 24). Beyond normal manual review, Twitter adds interstitial warnings that let targeted users decline to view potentially sensitive content, and the platform also uses user flagging of tweets as a method of identifying hate speech for review. Reddit implemented a 'quarantine' function in 2015, which allows the platform to quarantine content and communities; if a publisher wants the material republished, it must be vetted by a human administrator on the platform and restored after a successful appeal (Ullmann & Tomalin, 2020, p. 72).
However, YouTube's rules are primarily aimed at prohibiting speech that attacks or degrades a group of people; what about attacks on individuals targeted as members of a group? And while violent speech is not allowed on Twitter, hate speech itself is not explicitly regulated, and apart from very direct and specific threats of violence it cannot be fully restricted. Because of these factors and regulatory conflicts, social media self-regulation remains patchy, and improving the regulation and review of platform content should be a priority for further development.
Regulating hate speech perfectly remains a challenge, given its conflict with some conceptions of "free speech", the differing cultural backgrounds and social contexts of individual countries, and the differing laws, regulations and rules they have developed and apply. All of these must be taken into account in governing hate speech. Social media platforms also need to keep up with new legislative initiatives and policies in each country to assess how best to regulate online hate speech while balancing freedom of expression against its prohibition. To achieve this, platforms should be transparent with researchers and the general public about the content they remove (O'Regan and Theil, 2020), and regulatory guidance on penalty notices and an easy-to-follow appeals process should be automatically offered to all users. Platforms also need to better balance automated and manual approaches to moderation, and to train and educate social media community managers and moderators to improve their skills and professionalism (Sinpeng et al., 2021, p. 2). Policy formulation and review needs to be extended to protect more target groups; otherwise, vulnerable groups without specific protection will suffer more serious consequences from hate speech. Platforms could also hold regular moderation advisory forums so that target groups and platform administrators can discuss practical approaches to content management (Sinpeng et al., 2021, p. 2).
This blog has discussed hate speech and online harms through a literature survey, focusing on the glass ceiling and the case of the Chinese pink-haired girl as two illustrations of the harm caused by hate speech in the context of misogyny. Hate speech inflicts multiple psychological and physical harms on target groups, in some cases causing indelible damage. Governments and social media platforms need to take this issue seriously and develop more detailed and targeted measures and policies to help manage and address it. Members of the public also need to abide by the laws and rules governing hate-speech-related behaviour and to help report hate speech posted by other users.
- ADL. (2022). Online Hate and Harassment: The American Experience 2021. ADL. https://www.adl.org/resources/report/online-hate-and-harassment-american-experience-2021
- Alkiviadou, N. (2019). Hate speech on social media networks: towards a regulatory framework? Information & Communications Technology Law, 28(1), 19–35. https://doi.org/10.1080/13600834.2018.1494417
- Barker, K., & Jurasz, O. (2019). Online misogyny. Journal of International Affairs, 72(2), 95-114. https://www.jstor.org/stable/26760834
- Bertrand, M., Black, S. E., Jensen, S., & Lleras-Muney, A. (2019). Breaking the Glass Ceiling? The Effect of Board Quotas on Female Labour Market Outcomes in Norway. The Review of Economic Studies, 86(1), 191–239. https://doi.org/10.1093/restud/rdy032
- Flew, T. (2021). Hate Speech and Online Abuse. In Regulating Platforms (pp. 115-118).
- Hänel, L. (2022). Germany’s battle against online hate speech [image]. DW. https://www.dw.com/en/germanys-battle-against-online-hate-speech/a-60613294
- Jennings, M. (2022). Controversial internet personality Andrew Tate banned from TikTok, Instagram and YouTube. ABC NEWS. https://abcnews.go.com/International/controversial-internet-personality-andrew-tate-banned-tiktok-instagram/story?id=88794004
- Laub, Z. (2019). Hate Speech on Social Media: Global Comparisons. COUNCIL on FOREIGN RELATIONS. https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons#chapter-title-0-3
- Lu, F. (2023). ‘Pink hair prostitute’ taunts drive woman, 23, to suicide following 6 months of online abuse; millions in China mourn death. South China Morning Post. https://www.scmp.com/news/people-culture/trending-china/article/3215846/she-runs-30-pigs-man-trends-china-after-revealing-live-girlfriend-4-years-disappeared-livestock?module=perpetual_scroll_2&pgtype=article&campaign=3215846
- O’Regan, C., & Theil, S. (2020). Hate speech regulation on social media: An intractable contemporary challenge. Research OUTREACH. https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/
- Right Response team. (2022). ACT NOW: Tech platforms must act against dangerous misogynist Andrew Tate. HOPE not hate. https://hopenothate.org.uk/2022/08/19/act-andrew-tate/
- Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award: Regulating Hate Speech in the Asia Pacific, 1-47. https://hdl.handle.net/2123/25116.3
- Taparia, M., & Lenka, U. (2022). An integrated conceptual framework of the glass ceiling effect. Journal of Organizational Effectiveness : People and Performance, 9(3), 372–400. https://doi.org/10.1108/JOEPP-06-2020-0098
- Teller Report. (2023). Hangzhou "pink-haired girl" passed away, how hard is it to find out who is behind the Internet violence? Teller Report. https://www.tellerreport.com/life/2023-02-22-hangzhou-%22pink-haired-girl%22-passed-away–how-hard-is-it-to-find-out-who-is-behind-the-internet-violence-.HyWaPSyECi.html
- Ullmann, S., & Tomalin, M. (2020). Quarantining online hate speech: technical and ethical perspectives. Ethics and Information Technology, 22, 69-80. https://link.springer.com/article/10.1007/s10676-019-09516-z
- Woods, F. A., & Ruscher, J. B. (2021). Viral sticks, virtual stones: addressing anonymous hate speech online. Patterns of Prejudice, 55(3), 265–289. https://doi.org/10.1080/0031322X.2021.1968586