Few of us can imagine a world without the internet and social media. We live in a globally connected world where, with one tap, we can reach someone on the other side of the planet. Yet this privilege of modern technology and globalization comes with serious costs: easy access to the internet has also driven a rise in hate speech and other online harms on social media (Flew, 2021). Online harm takes many guises. It may appear as hate speech decrying or shaming a community, a gender, or a culture, or more insidiously as stalking, cyberbullying, and similar behaviors (eSafety Commissioner, 2022). Legislation like Australia's Online Safety Act 2021 is therefore needed to widen the protection net against online harm and other toxic forms of abuse experienced on digital platforms. There is a further need for content moderation, undertaken by government agencies or by the big tech companies themselves, to protect the rights and privileges of everyone involved (Roberts, 2019). The aim of this blog post is to examine hate speech and the various forms of online abuse directed at certain sections of the population through social media, and to make the case for content moderation across all digital platforms. The post critically analyses relevant case studies of harmful online behavior that demonstrate the need to rein in the free agency of digital tech giants, whether through better legislation or by compelling them to improve their content-moderation practices.
The thesis of this post, therefore, is that hate speech and social media violence are ruinous to the well-being of society, and that government, civil society, and media conglomerates alike must do everything possible to curb this form of abuse.
“Platformed racism” as an amplifier of hate speech and online harms
The problem of online harm and hate speech makes social media a nightmare for those on the receiving end of such abuse. Hate speech is an act of subjugation that flows directly from the structural inequality existing in society (Sinpeng et al., 2021). While social media has become a vital means of communication worldwide, there has been a simultaneous rise in hate speech linked with online misinformation and fake news, which then takes the form of persecution of racial and gender minorities, migrants, and the poor (Sinpeng et al., 2021). In other words, even as social media has helped us communicate with people far and wide, it has opened a deeper fissure by propagating online hate.
Figure 1.0: Hate speech and Online Harms as being ruinous towards the well-being of individuals.
(Source: O’Regan & Theil, 2020)
Furthermore, the digital economy is ruled by a few major tech giants who help entrench social inequalities by failing to curb the variations of ‘platformed racism’ common on every major social media platform (Matamoros-Fernández, 2017).
Figure 2.0: Social media platforms promoting hate speech and online harms
- In her article “Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube”, Matamoros-Fernández gives the example of Adam Goodes, an Indigenous Australian Football League star, and the racist hate speech he faced on social media.
- The controversy revealed an entanglement: on one hand, users invoked their rights to privacy to disguise and amplify racist humor and sectarian behavior; on the other, the platforms’ own algorithms reproduced both overt and covert forms of hate speech (Matamoros-Fernández, 2017). This challenges the pose of neutrality adopted by social media owners.
- Australia’s Online Safety Act 2021 mandates the use of both algorithms and human content moderators to limit hate speech and online bullying. It is fair to say, then, that redress and protection of people’s rights are only possible through changes in digital law and legislation.
What is your opinion of the idea of platformed racism, and how is it a contributing factor in rising hate speech and online harms?
Hate speech and online abuse promoting inter-community violence
There is a definite connection between online harassment and social or community-based violence. In their report “Social Media Mob: Being Indigenous Online”, Carlson and Frazer (2018) show that social media has never been a neutral place for cultural, religious, or ethnic minorities. Facebook has the capacity to amplify situations negatively and has incited people to fight outside the space of the online community (Carlson & Frazer, 2018). Rather than curbing online hate speech, Facebook and other digital platforms have helped spread a culture of hate to the point where the violence no longer stays online but breaks out physically across communities as well.
- A Facebook community page for residents of Kalgoorlie in Western Australia spread claims that a 14-year-old Aboriginal boy had been deliberately hit and killed by a non-Indigenous driver. This sparked intense violence between the communities, and with incidences of hate speech mounting, the situation spiraled out of control for a time.
Figure 3.0: Racial fault lines at Kalgoorlie further exacerbated by hate speech and harmful online behavior broadcast through social media
(Source: Wahlquist, 2016)
- The much-debated Online Safety Bill in the United Kingdom addresses similar issues: racist or sexist content deemed “legal but harmful” falls under what critics have called a censor’s charter. While such content is not made a criminal offense, tech giants must still take content-moderation measures, and users are given tools to block accounts that promote hate online, which to an extent restores agency to victims of abuse (Milmo, 2023). Online violence and hate speech, then, are not confined to the virtual space; they can bring harm into the physical sphere as well.
Do you agree that online hate speech and violence can and do result in violence outside the purview of social media platforms?
The gendered nature of hate speech and online harms
Another problematic area that demands attention when dealing with online hate speech and violence is gender: sexist content is widely produced and spread across all social media platforms.
Figure 4.0: Impact of hate speeches and online harms on women
(Source: UNDP, n.d.)
- In her article “#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures”, Massanari describes the misogynistic activism that flourishes on Reddit. By following a misogynistic form of platform politics, Reddit has made itself a safe space for views that are completely at odds with the principle of equality and that demean women and sexually target them. Only through legislative change can such platforms be held liable for the wrongs they help propagate and be bound to a content-moderation charter (Milmo, 2023).
Figure 5.0: Hate speeches and online harms and their consequences
(Source: Council of Europe, 2014)
- Gender-based violence that finds expression through hate speech and harmful online behavior on social media is thus a structural culmination of our hierarchically ordered society, and relief can only come from government and civil society pushing policy changes targeted at social media platforms.
What other impacts do you think the gendered nature of hate speech and online harms has on those it is aimed at?
Hate speech and online harms: a cross-sectional view spanning gender, race, and ethnicity
Women suffer a significant amount of cyber violence and hate, and this has partly to do with their gender; other elements, such as racial identity, ethnicity, and social class, count as well (Lineham, 2022). A closer look at women from marginalized ethnic or religious communities shows how platform-tolerated online violence is a direct assault on basic human rights.
- Research conducted by eSafety in 2021 found that Aboriginal and Torres Strait Islander women felt threatened and abused by vulgar, insulting, and hateful comments posted publicly on Facebook groups and pages. They reported fearing that the sexual harassment they deal with every day on social media might one day culminate in direct physical assault. This case study shows how Aboriginal women live in a constant state of tension, fearing violation.
- Religion can be another axis along which women are targeted on social media. Greens Senator Mehreen Faruqi of New South Wales has faced repeated online abuse since becoming Australia’s first female Muslim senator (Lineham, 2022). She attributes this to how she presents herself and to her membership of a marginalized religious community, abuse she faces on a large scale with little hope of redress.
Figure 6.0: The cultural, racial, and gendered targeting behind hate speech and online harms needs to stop.
(Source: Griffin, 2018)
- Legislation has been made and updated time and again; even so, five years ago Amnesty International submitted a report to the UN stating the need for content moderators to be trained to correctly identify and deal with gender- and identity-related abuse on social media platforms (Lineham, 2022). Lineham (2022) argues that only through transparency, accountability, and sufficient resource allocation can social media companies adequately deal with online hate speech and violence.
- By updating its existing Online Safety Act 2021, the Australian government has aimed to hold social media companies accountable for abusive content posted on their platforms (eSafety Commissioner, 2022). Through such measures, both users and companies can be made answerable for the wrongs they perpetrate, providing a safer digital space in which people can participate.
Fig 7.0: Creating a safe space in social media free from hate speeches and online harms
(Source: United Nations, 2022)
This post set out to discuss hate speech and the various forms of online abuse directed at certain sections of the population through social media, and to make the case for content moderation across all digital platforms. Its thesis was that hate speech and harmful online behavior are ruinous to the well-being of society, and that government, civil society, and media conglomerates must all act to curb this abuse. The post found that hate speech and online abuse perpetuate a culture of subjugation, enabled both by the biased attitudes of platform owners and by the privacy protections they extend to the users who reproduce such filth. It also found that online violence and hate speech are not confined to the virtual space; they can bring harm into the physical sphere as well. Finally, it showed how gender-based violence on social media is a structural culmination of our hierarchically ordered society. Linking these findings to currently relevant case studies, the post argued that redress is only possible through legislation, and that nations like Australia are endeavoring to change their digital law on hate speech and online harms so as to better protect people’s rights and privileges. This will, in effect, force tech companies to reorient their position and comply with norms censoring harmful content.
Please share your responses to the questions above via the contact details below.
References

Carlson, B., & Frazer, R. (2018). Social Media Mob: Being Indigenous Online. https://research-management.mq.edu.au/ws/portalfiles/portal/85013179/MQU_SocialMediaMob_report_Carlson_Frazer.pdf
eSafety Commissioner. (2022). Learn about the Online Safety Act. https://www.esafety.gov.au/newsroom/whats-on/online-safety-act
Flew, T. (2021). Regulating Platforms (pp. 91–96). Polity.
Lineham, I. (2022, June 17). Online abuse against women is rife, but some women suffer more. Women in Media. https://www.womeninmedia.com.au/post/online-abuse-against-women-is-rife-but-some-women-suffer-more
Massanari, A. (2016). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118x.2017.1293130
Milmo, D. (2023, January 17). TechScape: Finally, the UK’s online safety bill gets its day in parliament – here’s what you need to know. The Guardian. https://www.theguardian.com/technology/2023/jan/17/online-safety-bill-meta-pinterest-snap-molly-russell
Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press. https://www.jstor.org/stable/j.ctvhrcz0v
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific (pp. 1–47). https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf