Introduction to Hate Speech: The Conflict of Stereotypes and Ideologies
Hate speech can be seen as a form of expression comparable to physical harm: it hurts people, and policy responses are shaped by the detriment it causes (Koncavar, 2013, p. 675). It often takes the form of offensive communication that expresses hateful and discriminatory ideologies through stereotypes, signaling ideological conflicts over race, gender, and religion. Such expression may lead to discrimination against and exclusion of certain groups in society, or it may promote and maintain the privilege and superiority of others (Koncavar, 2013, p. 676).
Delgado and Stefancic (2020) described hate speech as overt, conscious, and deliberate content that slanders a group or individual; it not only increases the risk of inflaming hostility between parties but can also provoke cultural, religious, and faith-based conflicts between groups (Paz et al., 2020, p. 1).
The web: making hate speech easier
With the continuous development and maturation of information technology, social media platforms have become a fast, convenient, and centralized medium of information dissemination, allowing people to spread information to a large community in a short period of time. Compared to traditional media, online social networking is a distinctive form of networking that increases the stickiness and engagement of web citizens worldwide, aiming to create communities based on common interests (Chetty & Alathur, 2018, p. 108).
The features of social media and the diminished role of the “gatekeeper” have indirectly made these platforms a “petri dish” for hate speech and the fermentation of distorted ideologies. More and more extremist groups and individuals now use social media to expand their influence and appeal, posting hate speech, discrimination, and prejudice to “pull in” more followers and advocates (Chetty & Alathur, 2018, p. 109).
More importantly, through the recommendation mechanisms of AI algorithms, inflammatory hate speech is more easily pushed to groups or communities that share common beliefs and ideologies. By sharing and retweeting one another’s posts, these communities internally reinforce their views while hardening hostility toward those who hold opposing ones.
Simply put, when one segment of a social media platform begins to spread hate speech, others follow and support it, giving the speech the opportunity to spread and gain exposure quickly.
Hate speech on social media is undermining gender equality
According to Amnesty International, 64 percent of female respondents reported that “online hate speech and cyberbullying against groups of women is common,” and 23 percent said they had experienced abuse or harassment online (Dhrodia, 2017). The same data showed that nearly 90 percent of women in the United States (88%), New Zealand (88%), the United Kingdom (90%), Italy (89%), and Spain (89%) said that online hate speech has traumatized women as a group (Dhrodia, 2017).
Chetty and Alathur (2018) also note that malicious words or actions directed at an individual or group on the basis of gender or gender identity (including sexual orientation) constitute a form of socially stigmatizing hate speech. Women are its primary victims, suffering psychological and emotional trauma, while the speech subtly undermines social inclusion, gender equality, and the development of cultural diversity.
With the growth and popularity of web technology and the widespread use of social networks, these innovations are being turned against women and girls, and online hate speech targeting them is now considered a rampant global problem.
Neglected victims: The LGBTQ community
The nightmare of online hate crimes and hate speech has further discouraged lesbian, gay, bisexual, and transgender (LGBT) people from fully engaging with society. Numerous EU member states do not even track the extent of hate speech and online harm directed at LGBT people (European Union Agency for Fundamental Rights, 2009, pp. 1-2).
According to GLAAD’s 2021 report on online hate speech and harassment, LGBTQ respondents suffered harassment at a higher rate than any other identity group, around 64 percent, with little or no improvement from 65 percent in 2020. About 70 percent of LGBTQ respondents had experienced harassment on social media platforms, and 51 percent had experienced graver abuse or threats online (Vogels, 2021). Facebook, one of the world’s most dominant social media platforms, accounted for the highest share of online hate speech against the LGBTQ community, with 75 percent of victims saying they had experienced it there, followed by Twitter (24%), Instagram (24%), YouTube (21%), and TikTok (9%) (GLAAD, 2021, p. 7).
Such overwhelming numbers reflect the prejudice, exclusion, and cultural conflict that the gay community still faces in society. More importantly, the phenomenon reflects how existing social media platforms have neglected to regulate online hate speech, which is the source of this tragedy. As the report states: “Millions of online users are experiencing hate and harassment online as a matter of routine, and technology companies are not putting enough resources and effort into addressing the problem, no matter how flashy the excuses they give the public” (GLAAD, 2021, p. 8).
Case demonstration: The suicide of Jamey Rodemeyer
Jamey Rodemeyer was an American high school freshman from Amherst, New York, who was active against homophobia and helped victims of homophobic bullying on social media platforms. Tragically, Rodemeyer took his own life on September 18, 2011, after overwhelming and prolonged homophobic bullying. Because of his sexual orientation, Rodemeyer had been subjected to name-calling and bullying in daily life and online; anonymous, hurtful messages such as “JAMEY IS STUPID, GAY, FAT, AND UGLY. HE MUST DIE!” and “I do not care if you die. No one will. So just do it 🙂 It will make everyone happier!” flooded his Formspring account endlessly. Despite this, he continued to speak out against discrimination toward the gay community and to encourage other victims on social media platforms, particularly YouTube (Tan, 2011).
Ten years after Jamey’s death, his mother, Tracy, said in an interview with Time: “Some days feel like they happened yesterday. It was a decade-long period of anger and grief. When the homophobic person still bullies the LGBTQ community on social media platforms, it is painful for everyone involved, even if they do not end their lives.” “Jamey disappeared at 14 and a half, just like he did when he was alive, and nobody had the right to make him feel he never belonged here” (Carlisle, 2021).
It is being remedied, but is it really working? (Existing solutions and critical analysis)
Currently, mainstream social media platforms have begun to use relatively mature artificial intelligence and algorithmic techniques to assist in the governance of hate speech. For example, YouTube’s Content ID uses algorithms to identify and remove infringing content, while AI-based hash-matching algorithms are used to resist the spread of child pornography (Thiago et al., 2021, p. 701). Social media giants like Twitter and Facebook have started adopting Natural Language Processing (NLP) tools and sentiment analysis schemes to estimate the “toxicity level” of a passage and automatically delete inflammatory and biased posts (Thiago et al., 2021, p. 702). However, current AI technology still cannot completely eradicate hate speech on social media platforms, especially when natural language processing must analyze and filter complex cultural contexts. Hate speech often relies on semantic, cultural, and historical background tied to particular ideologies and regions, making it difficult for AI to identify accurately. The accuracy of the AI filtering tools popular among companies such as Facebook and Twitter still hovers between 70% and 80%; in other words, roughly one in every four to five filtering decisions mistakenly removes content that contains no hateful patterns, indirectly undermining the diversity of content on these platforms (Thiago et al., 2021, p. 704). In a remarkable case from 2017, advocates complained to Facebook that its moderation was disproportionately deleting posts by minority users (Levin, 2017). Even when staff perform a primary review of content, bias or miscalculation in that process quickly propagates errors into the AI filtering tools, creating a vicious internal circle.
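As a toy illustration of why threshold-based filtering misfires, consider a minimal keyword-scoring sketch. All terms, weights, and the threshold here are invented; real platform NLP models are far more sophisticated, but the failure mode is the same.

```python
import re

# Invented word weights and threshold -- a stand-in for a learned
# "toxicity level" score, not any platform's real model.
TOXIC_TERMS = {"stupid": 0.4, "ugly": 0.3, "die": 0.5}
THRESHOLD = 0.6  # posts scoring at or above this are removed

def toxicity_score(post: str) -> float:
    """Sum the weights of known toxic terms appearing in the post."""
    words = re.findall(r"[a-z]+", post.lower())
    return min(1.0, sum(TOXIC_TERMS.get(w, 0.0) for w in words))

def should_remove(post: str) -> bool:
    return toxicity_score(post) >= THRESHOLD

# A genuinely hateful post is caught...
assert should_remove("you are stupid and ugly, just die")
# ...but a benign post reusing the same words is removed too -- the kind
# of false positive behind the 70-80% accuracy figure cited above.
assert should_remove("in the film the stupid, ugly villain must die")
# A harmless post passes.
assert not should_remove("have a wonderful day everyone")
```

Context-blind scoring of this kind cannot distinguish an attack from a film review that happens to use the same vocabulary, which is exactly why cultural and semantic context defeats simple filters.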
In terms of regulation, the EU Commission reached an agreement, the “Code of Conduct on countering illegal hate speech online,” with Facebook, Microsoft, Twitter, and YouTube in 2016 to prevent and combat the spread of illegal hate speech online; any text containing racist or hateful speech on these platforms is to be reviewed and, where necessary, removed within a short period (European Commission, n.d.).
Governments looking to stem the rise of hate speech are also using legislation to target it indirectly, including cybercrime laws, telecommunications laws, and safe-space policies (Sinpeng et al., 2021, p. 13). In addition, different countries have developed constitutional and criminal-law rules, or proposed regulations, to curb hate speech. In the United States, for example, Section 230 of the Communications Decency Act shields social media platforms from liability for content their users create, while also protecting the platforms’ good-faith efforts to remove content that damages the community atmosphere, such as hate speech and material that incites ethnic hostility (O’Hara & Campbell, 2023).
It is worth noting that Facebook’s community standards also explicitly protect gender equality and sexual orientation from hate speech (Sinpeng et al., 2021, p. 13). However, apparent loopholes remain in specific legal provisions. When authorities amend or draft laws and policies in response to hate speech, they risk curtailing the diversity of speech and violating the government’s commitment to freedom of expression. Some also argue that governments or institutions might abuse their power during governance to suppress the views of opposition parties or dissidents, limiting opportunities for ordinary groups to express their opinions and ideas freely (Schieb & Preuss, 2016, p. 3).
Act as soon as possible (Suggestions)
Firstly, the primary priority for social media platforms is to improve and revise their current community policies and regulations on hate speech, including through independent audits and transparency about research data. They should also work with stakeholders, particularly the LGBTQ community, to promote ongoing research on tools for combating hate speech and online harm. There is a cross-platform obligation to promote cultural diversity, moderate extremes, and reduce discrimination against women and LGBTQ people in order to build a favorable online community ecosystem.
As previously noted, social media companies have already invested substantial resources and effort in AI and algorithms, and the adoption of AI to keep social media platforms sustainable will be a significant trend in the future. The Online Hate Index (OHI), developed by the Anti-Defamation League in partnership with UC Berkeley’s D-Lab, is an outstanding example of a classifier built this way: volunteers from the Jewish community manually label antisemitic content, and those labels train an algorithmic tool to quickly generalize language patterns across large amounts of offensive and hateful speech (Anti-Defamation League, n.d.). Thanks to this approach, the AI tool can quickly scan the text or images of social media content and connect them to the tags assigned by volunteers. A potential path forward, then, is for social media platforms, especially Facebook and Twitter, which are relatively overloaded with sexist hate speech, to cooperate with nonprofit LGBTQ organizations and gender-equality advocacy groups, inviting their volunteers to participate in the learning and development process of AI tools and thereby minimizing the AI’s bias and errors in filtering user-generated content.
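The volunteer-labeling loop described above can be sketched as follows. This is a minimal, hypothetical model with invented toy posts and labels, not the OHI’s actual pipeline: volunteers tag example posts, and those tags drive a simple learned word-weighting.

```python
import re
from collections import Counter

def tokenize(text: str) -> list:
    return re.findall(r"[a-z]+", text.lower())

def train(labeled_posts):
    """labeled_posts: (post_text, is_hateful) pairs supplied by volunteers.
    Returns a per-word weight: how strongly the word signals hate
    (smoothed fraction of its occurrences that were in hateful posts)."""
    hate, benign = Counter(), Counter()
    for text, is_hateful in labeled_posts:
        (hate if is_hateful else benign).update(tokenize(text))
    vocab = set(hate) | set(benign)
    return {w: (hate[w] + 1) / (hate[w] + benign[w] + 2) for w in vocab}

def classify(weights, post, threshold=0.6):
    """Flag a post whose average word weight crosses the threshold;
    unseen words get a neutral 0.5."""
    toks = tokenize(post)
    if not toks:
        return False
    return sum(weights.get(t, 0.5) for t in toks) / len(toks) >= threshold

# Invented toy labels standing in for volunteer annotations.
labeled = [
    ("they should all disappear", True),
    ("those people should disappear", True),
    ("i love this community", False),
    ("great people in this community", False),
]
weights = train(labeled)
assert classify(weights, "you should just disappear")
assert not classify(weights, "i love this community")
```

The point of the sketch is the division of labor: community volunteers contribute the culturally informed judgments, and the model merely generalizes their labels, which is why involving LGBTQ and gender-equality volunteers could reduce the filtering bias discussed earlier.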
Anti-Defamation League. (n.d.). Online hate index. Anti-Defamation League. https://www.adl.org/online-hate-index-0
Chetty, N., & Alathur, S. (2018). Hate speech review in the context of online social networks. Aggression and Violent Behavior, 40, 108–118. https://doi.org/10.1016/j.avb.2018.05.003
Dhrodia, A. (2017, November 20). Unsocial media: The real toll of online abuse against women. Amnesty Insights. https://medium.com/amnesty-insights/unsocial-media-the-real-toll-of-online-abuse-against-women-37134ddab3f4
European Commission. (n.d.). The EU Code of conduct on countering illegal hate speech online. European Commission. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en
European Union Agency for Fundamental Rights. (2009). Hate speech and hate crimes against LGBT persons. European Union Agency for Fundamental Rights. https://fra.europa.eu/sites/default/files/fra_uploads/1226-Factsheet-homophobia-hate-speech-crime_EN.pdf
Carlisle, M. (2021, September 17). A decade after Jamey Rodemeyer’s death, his parents are still trying to protect kids from homophobia and bullying. Time. https://time.com/6099086/jamey-rodemeyer-death-parents-interview/
GLAAD. (2021). Social media safety index. GLAAD. https://www.glaad.org/sites/default/files/images/2021-05/GLAAD%20SOCIAL%20MEDIA%20SAFETY%20INDEX_0.pdf
Koncavar, A. (2013). Hate speech in new media. Academic Journal of Interdisciplinary Studies, 2(8), 675–681. https://doi.org/10.5901/ajis.2013.v2n8p675
O’Hara, K., & Campbell, N. (2023, February 24). What is Section 230 and why should I care about it? Internet Society. https://www.internetsociety.org/blog/2023/02/what-is-section-230-and-why-should-i-care-about-it/
Paz, M. A., Montero-Díaz, J., & Moreno-Delgado, A. (2020). Hate speech: A systematized review. Sage Open, 10(4), 1–12. https://doi.org/10.1177/2158244020973022
Schieb, C., & Preuss, M. (2016, June). Governing hate speech by means of counterspeech on Facebook. In 66th ICA Annual Conference, at Fukuoka, Japan (pp. 1–23). International Communication Association. https://www.researchgate.net/publication/303497937
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific [Final report]. Dept. of Media and Communications, The University of Sydney, and The School of Political Science and International Studies, The University of Queensland. https://doi.org/10.25910/j09v-sq57
Tan, S. (2011, September 20). Teenager struggled with bullying before taking his life. The Buffalo News. https://web.archive.org/web/20120318231801/https://buffalonews.com/city/schools/article563538.ece
Thiago, D. O., Marcelo, A. D., & Gomes, A. (2021). Fighting hate speech, silencing drag queens? Artificial intelligence in content moderation and risks to LGBTQ voices online. Sexuality & Culture, 25(2), 700–732. https://doi.org/10.1007/s12119-020-09790-w
Vogels, E. A. (2021, January 13). The state of online harassment. Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/