Understanding Hate Speech and Online Harms
What is hate speech?
“any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.” (United Nations, 2021)
Hate speech encompasses language or content that promotes or incites violence, hatred, or discrimination against individuals or groups based on attributes such as race, religion, gender, or sexual orientation. Online harms cover a wider range of abusive behaviors, including cyberbullying, harassment, and the dissemination of harmful or offensive material. The effects of online hate can be far-reaching and devastating for its targets, leading to mental health problems, self-harm, or even suicide. It can also reinforce and perpetuate existing social divisions, fostering an environment of intolerance and animosity.
Online Harassment and Cyberstalking of Women in Gaming
With developments in gaming technology and the growing accessibility of games, gaming has become popular across genders and age groups. Despite this popularity, the anonymous nature of online gaming leaves some gamers open to harassment and negative experiences. In a July 2022 survey in the United States, 47 percent of female gamers stated that they had suffered abuse while gaming online because of their gender.
Most common groups experiencing identity-based harassment while playing online games in the United States as of July 2022
Online harassment affects men and women differently. Men are more likely to report experiencing at least one type of harassment: 44% of online men report some form of harassment compared to 37% of online women. Men are more likely to encounter less severe forms, such as name-calling and embarrassment, and are also more likely to receive physical threats online. Women, particularly those aged 18-24, are more likely to experience severe forms of harassment, such as stalking and sexual harassment, with 26% reporting being stalked and 25% being sexually harassed. Severe harassment also remains notably prevalent among women only a few years older, aged 25-29. Moreover, young women face higher rates of physical threats and sustained harassment, at levels similar to those reported by young men and young people in general.
One of the most egregious forms of online harassment faced by women in gaming is cyberstalking: the use of the internet and other forms of electronic communication to harass and intimidate a target. It can take many forms, including sending threatening messages, hacking into someone’s social media accounts, and sharing personal information online. Women in gaming have been particularly vulnerable to cyberstalking. The anonymity of the internet, combined with the male-dominated nature of the gaming community, has created an environment in which women are often subjected to a barrage of sexist, racist, and homophobic comments. This has a chilling effect on their participation in these spaces, with long-term consequences for their career prospects and personal well-being.
One example of the pervasive nature of online harassment and cyberstalking is the story of Anita Sarkeesian, a feminist media critic and creator of the YouTube series “Tropes vs. Women in Video Games.” In May 2012, she launched a Kickstarter campaign to fund the series, which would analyze gender inequality in games. In her videos, Sarkeesian examined depictions of women in popular games such as Grand Theft Auto, Assassin’s Creed, and Far Cry, concluding that female characters were often implicitly devalued, frequently over-sexualized, and regularly subjected to extreme violence, with players sometimes rewarded for killing them. Sarkeesian argued that games should be designed more thoughtfully rather than relying on female characters to provide sensory stimulation for players. Her commentary drew the ire of many gamers, who branded her an attention-seeking liar and subjected her to numerous threats, including a malicious game that let users click on her photo to make it appear increasingly bruised and beaten.
Sarkeesian’s experience is not unique. Many other women in gaming have faced similar forms of harassment and abuse, often for simply speaking out about issues related to gender and representation in games. The problem is so widespread that the gaming industry has been forced to take notice and make some attempts at addressing it. One company that has taken a proactive stance on this issue is Riot Games, the developer of the popular game “League of Legends.” In 2018, Riot Games was accused of fostering a toxic work environment that included sexism, harassment, and discrimination against women. In response, the company launched a series of initiatives aimed at addressing these issues and creating a more inclusive culture.
One of these initiatives was the establishment of the Riot Games Social Impact Fund, which provides financial support to organizations that work to promote diversity, inclusion, and equality in gaming. The company has also implemented a number of internal policies and training programs to address issues related to harassment and discrimination, and has made efforts to increase the representation of women and other underrepresented groups in its workforce. Despite these efforts, however, the problem of online harassment and cyberstalking in gaming remains a significant concern. Women continue to face abuse and intimidation online, often with little to no recourse. The gaming industry as a whole must continue to work towards creating a safer and more inclusive environment for all gamers, and address the root causes of these issues.
The Challenge of Regulating Hate Speech
One of the most significant hurdles in regulating hate speech online is determining what qualifies as harmful or hateful content. Flew (2021) argues that the line between free speech and hate speech is often blurred, making it difficult for social media platforms to strike the right balance. Additionally, cultural and regional differences further complicate the issue, as what may be considered hateful in one country might not be seen as offensive in another (Sinpeng et al., 2021).
Addressing hate speech and online harms is a complex task, with several key challenges and criticisms:
- Balancing free speech and safety: Stricter content moderation policies can limit free speech, making it difficult to strike a balance between preserving freedom of expression and ensuring user safety.
- Inconsistencies in content moderation: Social media companies have been criticized for their inconsistent enforcement of policies, leading to instances where harmful content remains online, while innocuous content is removed.
- Scale and speed: With billions of users and vast amounts of content posted daily, detecting and removing harmful content in a timely manner remains an immense challenge.
- International coordination: Differing legal frameworks and cultural norms around hate speech and online harms make it difficult to create a unified approach to combating this issue.
Addressing the Issue
To address the issue of hate speech and online harms, tech companies and governments must work together to develop effective policies and regulations. Tech companies must take a proactive approach to remove harmful content from their platforms and provide users with tools to report incidents of hate speech and harassment. Governments must develop regulations that hold tech companies accountable for their actions and provide users with avenues for redress.
In Australia, the Online Safety Act 2021 is an example of a regulatory framework that aims to address online harms. The act sets out Basic Online Safety Expectations for tech companies and empowers the eSafety Commissioner to order the removal of harmful content and to resolve complaints. Similarly, the UK’s Online Safety Bill, which is still under consideration in Parliament, aims to establish a statutory duty of care for tech companies and create a regulatory framework for addressing online harms.
Potential Solutions and Future Directions
Addressing hate speech and online harms requires a multi-faceted approach. Here are some potential solutions that could help make the internet a safer and more inclusive space:
1. Stricter enforcement of platform policies: Social media platforms must enforce their existing policies more effectively, investing in better content moderation systems and dedicating resources to addressing harmful content (Roberts, 2019).
2. Cross-platform collaboration: Platforms should work together to share information, best practices, and technology in the fight against online hate speech (Massanari, 2017).
3. Legislation and regulation: Governments should consider enacting laws and regulations that hold platforms accountable for the content they host, such as Australia’s Online Safety Act 2021 and the UK’s Online Safety Bill.
4. Education and awareness: Public awareness campaigns and educational initiatives can help users become more informed and responsible digital citizens, reducing the spread of harmful content.
Hate speech and online harms are complex issues that require a multi-faceted approach to address. The experiences of women in gaming highlight the intersectionality of hate speech and online harms and the need for tech companies and governments to take action to create a safe and inclusive online environment. The development of effective policies and regulations, coupled with proactive enforcement and user education, is key to addressing these issues and ensuring that the internet remains a space for free expression and healthy discourse.
The fight against hate speech and online harms is an ongoing battle, and while social media platforms and governments have made strides in addressing this issue, there is still much work to be done. As the internet continues to evolve, so too must our efforts to mitigate the harms that can arise from its use. As users of social media, we must also take responsibility for the content we share and engage with. By fostering open and respectful dialogue and reporting harmful content when we encounter it, we can contribute to creating a safer and more inclusive online environment for all.
Guterres, A. (2021). United Nations Secretary-General [definition of hate speech]. United Nations.
Burrell, J. (2019, October 16). Riot Games Social Impact Fund. https://www.riotgames.com/en/news/riot-gamessocial-impact-fund
Duggan, M. (2014, October 30). 5 facts about online harassment. Pew Research Center: Fact Tank.
Feminist Frequency. (2015, August 31). Women as Reward – Tropes vs Women in Video Games [Video]. YouTube.
Flew, Terry (2021) Regulating Platforms. Cambridge: Polity, pp. 91-96.
Massanari, Adrienne (2017) #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3): 329–346.
Roberts, Sarah T. (2019) Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press, pp. 33-72.
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland.
https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf
Statista. (2021). Share of gamers who have experienced harassment in video games worldwide as of July 2021, by identity. https://www.statista.com/statistics/1133194/harassment-video-games-identity/