From GamerGate to online harms: The challenge of regulating social media platforms

Source: Adobe Stock


As the Internet has developed rapidly, highly open social media platforms have attracted large numbers of users from around the world, and the volume of information and speech they carry has grown accordingly. Individuals and groups with malicious intent exploit the features of these platforms, such as intensive interaction and rapid transmission, to spread hate speech targeting genders or individuals, generating discrimination, violent attacks and even hate crimes online and offline. The responsibility of platform companies for regulating online speech, and how to balance freedom of speech against the prevention of online harm, have therefore become highly contested issues. This article analyses the definitions and impacts of hate speech and online harm, the case of GamerGate, and platform moderation and corporate responsibility.

What are hate speech and online harm, and what is their impact?

Hate speech on the Internet has been defined in detail, and different forms of speech may be classified as hate speech in specific contexts. It usually refers to speech harmful enough to incite hatred against individuals or groups with particular characteristics, such as race, gender or political affiliation, or against marginalised groups such as people with disabilities (Flew, 2021; Sinpeng, Martin, Gelber & Shields, 2021). It also has an exclusionary and violent character (Parekh, 2012). Such speech not only fosters distrust and hostility in society but can threaten social stability and create substantial online harm (Flew, 2021).

Source: Council of Europe (2018)

There is a tension between users’ right to freedom of speech on social media platforms and the content they post, and this tension partly explains the emergence of hate speech and online harm. Hate speech is regulated under international human rights law and in the laws of some liberal democracies (Sinpeng et al., 2021). Although the public has a legitimate right to freedom of speech on social media platforms, the lack of normative constraints on speakers makes hate speech, and the online harm it brings, difficult to regulate and control (Parekh, 2012; Sinpeng et al., 2021). Prominent social media platforms such as Twitter and Facebook have come under fire for not doing enough to police and remove hate speech and other harmful content.

In addition, online harassment driven by hate speech can have a profound negative impact on those targeted. Hate speech not only stigmatises the target group on social media but, because of its hostile nature, also makes its targets objects of hostility (Parekh, 2012). Statistics show that more than 40 per cent of social media users experienced online harm in 2017, and 18 per cent reported serious cases of persistent harassment, threats or stalking (Flew, 2021). Parekh (2012) likewise argues that hate speech increases prejudice, discrimination and intimidation, seriously disrupting the dignity and daily lives of target groups. Online hate can even spill over into offline harm (Sinpeng et al., 2021), such as strained relationships with family and friends caused by online harassment. As with face-to-face harassment, the negative effects on victims’ physical and mental health are sustained and long-term (Jhaver, Ghoshal, Bruckman & Gilbert, 2018). According to Jhaver et al. (2018), 56 per cent of victims suffer psychological problems as a result of online harm, including depression, serious emotional distress and, in extreme cases, suicide. According to 2017 data from the Pew Research Center, fear of online harassment and hate speech has led about 43 per cent of victimised users to change their personal information to escape online harm (Jhaver et al., 2018).

Online harm on social media platforms

Social media is the most common space in which online harassment occurs, and online harm can be further spread or amplified across cyberspace and social networks (Chatzakou, Kourtellis, Blackburn, De Cristofaro, Stringhini & Vakali, 2017). Abuse and its regulation on social media are increasingly pressing issues, and cyberbullies and online harassers typically benefit from three features of these platforms. The first is anonymity (Richard & Couchot-Schiex, 2020): anonymous or fake identities allow abusers to hide behind the platform without being directly affected by the incidents they cause. The second is the uncontrollability of hate speech and online harm (Richard & Couchot-Schiex, 2020), which relates to the nature of social media platforms, since disengagement allows bullies to evade the control of those involved. The third is the powerful transmission capacity of digital devices (Richard & Couchot-Schiex, 2020), which allows such incidents to spread rapidly and affect large numbers of people, positively or negatively, inciting even greater hatred and harm. Social media has thus contributed to the complexity and severity of hate speech and online harm, and social media platforms should be held responsible for the negative content and behaviour that result.

In addition, online harm and hate speech on social media tend to target specific groups. Jhaver et al. (2018) found that women, ethnic minorities and young people are most vulnerable to such harm. Moreover, double standards and prejudices against women, together with the spread of social norms into cyberspace, have been shown to amplify existing stereotypes of femininity and gender (Richard & Couchot-Schiex, 2020). Women are also subjected to intense scrutiny on social networking platforms (Penny, 2013), which makes them more vulnerable to gender-based discrimination and harassment. Online sexism is a form of hate speech or online abuse based on the gender of an individual or group (Penny, 2013). It aims to establish and enforce male-dominated gender norms online and to limit, or even eliminate, women’s power and opportunities in online spaces (Penny, 2013; Richard & Couchot-Schiex, 2020). Women who are active online can also attract mobs of haters, or worse, for a variety of reasons (Penny, 2013). This series of negative effects highlights the ongoing dilemma women face when using social networks.

Exploring the case of GamerGate on social media platforms

The 2014 GamerGate controversy, which raged on Twitter and other social media platforms, was a major case of women being subjected to mass hate speech and attacks online. Quinn, an American game developer, was falsely accused by her ex-boyfriend Gjoni of trading sex with a games journalist for a positive review of her game (Massanari, 2017), in a post that included details of their intimate relationship and supposed evidence such as nude photos (Alves-Pinto, 2014). The accusation was quickly circulated on Twitter by gamers under hashtags framed as a defence of journalistic ethics (Massanari, 2017). These users and their male allies also directed their harassment at Brianna Wu, another game developer who spoke out, and at Sarkeesian, a critic of sexism in games (Alves-Pinto, 2014). The targets received about 100,000 tweets, some of them criminal in nature (Alves-Pinto, 2014; Massanari, 2017). “GamerGate” became the primary hashtag and a rallying cry for those seeking to harass women in the games industry (Massanari, 2017), and tagging the victims exposed them to wider online harm. They were repeatedly threatened with death and rape, and their private information was published online, forcing them to leave their homes (Alves-Pinto, 2014; Flew, 2021). This created an environment of hate in gaming forums on platforms such as Twitter.

Source: Statista (2022)

GamerGate began as a moral smear against women in the video game industry (Chatzakou, Leontiadis, Blackburn, Cristofaro, Stringhini, Vakali & Kourtellis, 2019). It quickly spread to other parts of the Internet through platforms and grew into a larger movement organised around sexism, feminism and even socio-political dimensions (Chatzakou et al., 2019), eventually extending beyond online bullying and harassment to outright threats of offline violence, rape and murder (Chatzakou et al., 2019). All of this suggests that social platforms played an important role in spreading hate speech and online harassment during the GamerGate controversy.

Critics of the incident argue that social media platforms such as Twitter and Reddit did not do enough to eliminate hate speech and harm on their platforms, and that this failure helped create a toxic gaming environment hostile to women. Reddit’s administrators claimed that the site is a neutral and impartial forum for discussion and that individuals are responsible for their own actions, so they were reluctant to intervene effectively in controversial content (Massanari, 2017). The platform’s governance approach protects the rights of some users while helping to perpetuate a toxic technology culture (Massanari, 2017). And although some users reported abusive tweets, Twitter’s response was slow or ineffective. One possibility is that producers of hate speech are not disproportionately suspended by Twitter, or that the platform prefers suspending such accounts to deleting them (Chatzakou et al., 2017). Users whose accounts are suspended may go on to express and spread even more offensive language and exclusionary posts (Chatzakou et al., 2017). Moreover, because some victims have relatively small social networks of their own, those networks cannot help them cope with harassment and victimisation on the platform, resulting in sustained damage (Chatzakou et al., 2017).

This case makes clear the need for Twitter and other social media companies to moderate and regulate hate speech and harmful online content on their platforms.

Platform moderation, corporate responsibility and their challenges

Moderation rules and regulations have always existed on platforms to some extent. However, social media platforms remain controversial both for not doing enough to curb online harm and for suppressing freedom of speech, because hate speech and online harassment are multifaceted problems with no easy solutions. Platforms therefore need to adopt stricter governance measures. Platform governance can influence individual behaviour, and only through the cooperative responsibility of platform companies, governments and users can online harm be better controlled (Gorwa, 2019). Jhaver et al. (2018) propose that limiting and deleting users’ abusive content is also an important mechanism for maintaining the usability of online spaces, and social media companies should take steps to educate users about appropriate online behaviour. In addition, according to DeCook, Cotter, Kanthawala and Foyle (2022), no platform clearly defines harm, or the descriptions of harm are very narrow; Twitter, for example, defines harm only in relation to physical and psychological injury (DeCook et al., 2022), yet online harm extends beyond these particular types of harm and content (DeCook et al., 2022). The way a platform defines and resolves harm affects how users perceive the platform (DeCook et al., 2022). Platform governance therefore requires careful and flexible handling of the key terms used to define these harms. Twitter, for example, later developed a way to identify, review and control hateful content by quantifying and classifying harm and violence by severity (DeCook et al., 2022).

In terms of corporate responsibility, platform companies not only need to take responsibility for harm that occurs on platforms such as Twitter but also need to act to prevent similar incidents or further harm in the future, for example by developing better and more rigorous technical tools and improving corporate guidelines and management policies (DeCook et al., 2022; Gillespie, 2020). To cope with public scrutiny, platforms also need appropriate technologies for this serious problem, such as artificial intelligence and automated identification, which can be used to detect hateful and harassing content and place it on restricted blocklists (Gillespie, 2020; Jhaver et al., 2018). However, such identification and review techniques have limitations: there can be a mismatch between mechanical identification and the emotional experience of users on the platform, and review may even be conducted unfairly in the interests of the company (Gillespie, 2020). How to further optimise platform review and governance therefore still needs exploration.
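The mismatch between mechanical identification and human judgement can be illustrated with a minimal sketch of keyword-based filtering, the simplest form of automated moderation. This is an invented toy example, not any platform’s actual system; the blocklist and messages are hypothetical.

```python
# Minimal sketch of keyword-blocklist moderation, assuming a simple
# word-matching rule. Illustrates why mechanical identification can
# mismatch human judgement: it flags a victim quoting abuse (a false
# positive) and misses abuse phrased without banned words (a false
# negative). Blocklist and messages are hypothetical.

BLOCKLIST = {"idiot", "trash"}  # hypothetical banned terms

def flag_message(text: str) -> bool:
    """Flag a message if any blocklisted word appears in it."""
    words = {w.strip(".,!?:(").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

examples = [
    "You are an idiot and should leave",    # abusive: correctly flagged
    "Someone called me an idiot today :(",  # quoting abuse: false positive
    "Nobody wants you here, just go",       # abusive, no keyword: missed
]

for msg in examples:
    print(flag_message(msg), "-", msg)
```

Real systems use machine-learning classifiers rather than bare keyword lists, but as Gillespie (2020) notes, the underlying problem of context-blind classification at scale remains.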


In conclusion, the analysis of hate speech and online harm on social media platforms, combined with the case of GamerGate, shows that social media platforms can amplify the harm such speech causes, and that these incidents have serious consequences for both users and platforms. It is therefore necessary to balance freedom of speech against the prevention of online harm. Social media platforms such as Twitter need to take corporate responsibility, be accountable to users who have been harmed, and punish those who break the rules. They also need to work with governments and users to develop appropriate, case-by-case platform governance to regulate and control hate speech and online harm. All of this is conducive to building a more inclusive and respectful online environment.


References

Alves-Pinto, T. (2014, November 19). ‘GamerGate’ and gendered hate speech. Oxford Human Rights Hub.

Chatzakou, D., Leontiadis, I., Blackburn, J., Cristofaro, E., Stringhini, G., Vakali, A., & Kourtellis, N. (2019). Detecting Cyberbullying and Cyberaggression in Social Media. ACM Transactions on the Web, 13(3), 1–51.

Chatzakou, D., Kourtellis, N., Blackburn, J., De Cristofaro, E., Stringhini, G., & Vakali, A. (2017). Hate is not binary: Studying abusive behavior of #GamerGate on Twitter. Proceedings of the 28th ACM Conference on Hypertext and Social Media, 65–74.

DeCook, J. R., Cotter, K., Kanthawala, S., & Foyle, K. (2022). Safe from “harm”: The governance of violence by platforms. Policy and Internet, 14(1), 63–78.

Flew, T. (2021). Regulating platforms. Polity Press.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2).

Jhaver, S., Ghoshal, S., Bruckman, A., & Gilbert, E. (2018). Online Harassment and Content Moderation: The Case of Blocklists. ACM Transactions on Computer-Human Interaction, 25(2), 1–33.

Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.

Parekh, B. (2012). Is There a Case for Banning Hate Speech? In The Content and Context of Hate Speech (pp. 37–56). Cambridge University Press.

Penny, L. (2013). Cybersexism: sex, gender and power on the internet. Bloomsbury Publishing plc.

Richard, G., & Couchot-Schiex, S. (2020). Cybersexism: How gender and sexuality are at play in cyberspace. In Gender, Sexuality and Race in the Digital Age (pp. 17–30). Springer International Publishing.

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
