No matter when, where, or how you use the internet and platforms such as Twitter, Reddit, or Facebook, there always seems to be some form of unpleasant, suspicious, or outright dangerous activity transpiring. These could range from hateful speech to harassment or scams. Some call for platforms to be more accountable in the moderation of their services, whilst others argue the opposite, claiming the internet should be a free space. This argument stems from the origins of the Internet, which was seen as a separate entity from the physical world: a free space where anything could go and where one could express oneself freely, as per libertarian values (Barlow, 1996). These kinds of insinuations, and what transpires on the internet, sometimes make me want to say:

Futurama – A Clockwork Origin (2010)
In this post, we will be talking about a particular issue of concern on the internet: hate speech and online harms.
Hate speech refers to offensive discourse targeting groups or individuals based on inherent characteristics (e.g., race, religion, gender) and having the potential to threaten the “social order” (United Nations, 2021). Online harms are trickier to define, as they cover practices ranging from the borderline legal (e.g., harassment, spam) to outright harmful conduct (death threats, doxing, “swatting”). Taking notes from the Australian Government and the United Kingdom Parliament, online harms are activities that occur wholly or partially online and that can damage an individual in ways either “legal” or illegal (Australian Government, 2022; United Kingdom Parliament, 2021).
As these corners of the internet become quasi-breeding grounds for vitriol, hate speech, and various online harms, how these issues are governed becomes a matter of interest for the various parties involved, including users, governments, and, to an extent, the platforms themselves.
This post will be broken into two sections:
- What are these practices and why do they happen, with a focus on #GamerGate?
- What forms of governance currently exist and how are they enforced?
What? Why?
As previously defined, hate speech is essentially offensive speech targeting specific groups or individuals based on their inherent characteristics, whether that is their gender, sexual orientation, race, and so on. Online harms are more open-ended in definition but are generally associated with practices such as scams, doxing, and harassment.
These practices, in their numerous forms, are indicative of the communities themselves, whether it is their ideals, their personalities, or the projection of their perceived selves onto others (Massanari, 2017; Carlson & Frazer, 2018). These incidents of discrimination against members of a target group are often interlinked with misinformation and/or extremist viewpoints representative of the cultural landscape at the time (Sinpeng, 2021). Some cases of online harm also demonstrate a user’s technological expertise on both the platform and the internet itself (Massanari, 2017).
Before we continue, let us talk about the vitriol-filled controversy known as #GamerGate (#GG). Whilst the term itself was coined later in the controversy (Katawashounen, 2014), #GG began in 2014 when Eron Gjoni wrote a blog entry dubbed “The Zoe Post”. The post contained details on why he broke up with his ex-girlfriend Zoe Quinn, an independent game developer. Eron falsely claimed Zoe had exchanged sexual favours with a games journalist for positive reviews of her game “Depression Quest” (Lewis, 2015). The post quickly spread across all sectors of the internet, where a particular subset of users cited the “Zoe Post” to justify anti-feminist, sexist rhetoric and targeted attacks, including but not limited to death threats and hateful speech against Zoe herself, numerous female game developers, and prominent figures in the games industry (Auerbach, 2014; Stuart, 2014). The #GG controversy also saw instances of “swatting”, a practice related to doxing in which hoax distress calls are placed to police departments so that an armed unit is dispatched to a specific address; one journalist saw his home raided by armed police (Hern, 2015).
The #GG controversy demonstrates the extreme lengths to which certain netizens will go to express their opinions on a particular subject, but why resort to such measures? Carlson & Frazer (2018) note that whilst the internet offers people an opportunity to express themselves, it is also a force that can amplify racist and other harmful rhetoric. Some may resort to such practices because a community feels marginal or threatened (Massanari, 2017). In the case of #GG, a “marginalised” group was condemned by the mass media; this was interpreted as the community being under attack by the dominant culture, and thus it felt the need to “bear arms”. Many users still operate under the assumption that the internet is a “free space” (Barlow, 1996), so when something is expressed under a rhetoric of “freedom”, anything going against that rhetoric can be framed as an infringement on free speech. Thus, one may feel empowered and emboldened to engage in such rhetoric, go to extreme lengths to express one’s worldview, and justify it as the only correct perspective.
However, we also need to consider that harassment on the internet can transpire simply because “why not?”. For example, Hollywood actor Shia LaBeouf hosted a livestream protesting then-President Donald Trump, which was hijacked by 4chan users. The piece involved livestreaming a flag flying with the slogan “He Will Not Divide Us” (Figure 1) as political art. Online trolls then utilised every tool the internet had to offer: cross-referencing the timing of sunset and nightfall on the stream with locations across the United States, checking recorded flight paths against aircraft trails visible in the sky, and even using social media to track the actor’s whereabouts. The trolls successfully stole the flag within 37 hours (Internet Historian, 2017). When asked why they targeted Shia LaBeouf and his political piece, they gave no motive, agenda, or even political stance, but rather a desire to “f### with Shia LaBeouf” (Lamoureux, 2017).

Figure 1 – He Will Not Divide Us Flag
How does one approach these issues?
So, now that we have examined some of the many reasons why hate speech and online harms occur, let us ask how platforms such as Twitter and Facebook can govern these issues, and whether they should be held responsible at all.
There have been calls for platforms such as Reddit, Twitter, and Facebook to be held responsible and to be more transparent about their moderation processes. One could say that platforms are responsible because they create the algorithms that push content into the hands of users and, in turn, these issues of concern. In the case of #GG, critics called for Reddit’s “karma system” to be investigated, believing the algorithm had no checks in place for when a controversial topic such as #GG reached the front page; because the system is driven by upvotes, it simply added more fuel to the fire.
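To make that criticism concrete, here is a minimal sketch (in Python) of an upvote-driven “hot” ranking in the spirit of the algorithm Reddit once open-sourced; the constants and function name here are illustrative assumptions rather than Reddit’s production code.

```python
from datetime import datetime, timezone
from math import log10

# Epoch used as the reference point for the recency bonus (illustrative).
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot_score(upvotes: int, downvotes: int, created_utc: datetime) -> float:
    """Simplified 'hot' score: log-scaled vote balance plus a recency bonus."""
    score = upvotes - downvotes
    order = log10(max(abs(score), 1))      # 10x the vote balance adds roughly +1
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (created_utc - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)

# Example: a heavily upvoted thread from the height of #GG.
example = hot_score(upvotes=25_000, downvotes=4_000,
                    created_utc=datetime(2014, 8, 28, tzinfo=timezone.utc))
print(example)  # nothing in the score reflects what the post is actually about
```

The point is that such a score is computed purely from vote balance and age: a controversial thread that attracts heavy upvoting is surfaced exactly like any other popular post, which is the gap critics wanted investigated.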
Prior to major changes in its policy, Reddit was perceived to have a “laissez-faire” approach to moderation, with communities being moderated by volunteers from those communities themselves, often resulting in little to no enforcement of an already weak policy (Massanari, 2017; Stephan, 2020). Ironically, the strongest forms of moderation surrounding #GG came from other communities imposing strict self-moderation on the topic, resulting in locked or deleted threads (Katawashounen, 2014). You know things are bad when 4chan, a place often cited as the darkest corner of the internet, decides to practice self-moderation.
So how can the various platforms on the internet provide governance?
One common form of governance is laying out policies that outright condemn both online harms and hate speech. How these policies define hate speech and “online harm” differs slightly between platforms, but they tend to share similar framing and language. Implemented policies will have provisions prohibiting hate speech and specific practices such as doxing, in addition to a clause protecting vulnerable groups and individuals based on inherent characteristics. For example, Reddit’s (2021) policy states as follows:
- “Promoting hate based on identity or vulnerability”
- “Marginalized or vulnerable groups include, but are not limited to, groups based on their actual and perceived race, colour, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, pregnancy, or disability….”
- “…it does not protect those who promote attacks of hate or who try to hide their hate in bad faith claims of discrimination.”
A common criticism is that a platform’s terms of service on specific policies can be difficult to find, requiring knowledge of the platform itself and the ability to locate specific terms in a sea of technical jargon (Matamoros-Fernández, 2017).
With these policies in place, what tools are used to enforce them? Depending on the resources available to the platform, some form of content moderation is used to examine flagged content, and different platforms utilise many different methodologies (Figure 2). One popular approach across the tech sector is hiring employees, outsourced contract reviewers, or even volunteer moderators to examine reported violations of online harms and hate speech policies (Roberts, 2019; Woods, 2021). Professional moderators are ideally experts in the culture of the sites they moderate, as well as in the site’s audience and the locations of the platform’s main demographics. This work can be taxing, as employees are required to sift through distressing content (Roberts, 2019; Woods, 2021). Other platforms may opt to utilise artificial intelligence, with tools such as automated searches of text and audio material for specific terms or banned topics, or other tools such as skin filters, to enforce existing policies (Woods, 2021).

Figure 2 – Breakdown of social media platforms and which policies are enforced in their terms of service (Stephan, 2020).
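As a rough illustration of the automated text-search approach mentioned above, the sketch below shows hypothetical keyword-based flagging; the term list and function name are assumptions for illustration, and real systems combine far richer signals while still routing results to human reviewers.

```python
import re
from typing import List

# Hypothetical, deliberately tiny ban list; real platforms maintain large,
# context-aware lexicons alongside machine-learned classifiers.
BANNED_TERMS = ["dox", "swat", "kill yourself"]

PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(term) for term in BANNED_TERMS) + r")\b",
    re.IGNORECASE,
)

def flag_for_review(post_text: str) -> List[str]:
    """Return any banned terms found so the post can be queued for a human moderator."""
    return [match.group(0).lower() for match in PATTERN.finditer(post_text)]

print(flag_for_review("Someone threatened to dox and swat me last night."))
# ['dox', 'swat']
```

Even this toy example hints at the limits: simple matching misses context (quoting, reporting, satire) and is easily evaded with creative spelling, which is partly why human review remains central to the process (Roberts, 2019).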
So, these are the tools and ways in which platforms govern their ecosystems, and whilst some express a rhetoric of neutrality, they must “intervene” in public discourse (Matamoros-Fernández, 2017). It is worth adding that platforms arguably intervene not out of goodwill but to maintain their image as a “safe environment”, so that advertisers feel comfortable associating their brands with the platform.
Figure 3 demonstrates how Meta’s Facebook approaches content moderation, where economic circumstances, stakeholders such as customers and advertisers, and the regulatory framework the company operates under all intersect with one another.

Figure 3 – Economic, stakeholder and regulatory factors cross-effect on moderation (Sinpeng, 2021)
Whilst the moderation of a platform’s content is a vital process for combating hate speech and various online harms, I must ponder how effective these processes really are, especially when those who engage in such practices are quite creative and always innovating in what they excel at.
So… What Now?
The issues of concern discussed in this post, hate speech and online harms, have plagued many corners of the internet and numerous platforms for years. Whilst incidents such as #GamerGate may not have resulted in major policy changes on Reddit or other platforms, they at least served as a catalyst for discussions about platforms being more responsible and transparent about their moderation and algorithms. As time passes, so too does the review and development of platform governance and the policies platforms enforce. Despite these changes, hate speech and numerous online harms remain prevalent in today’s internet landscape, whether in much the same form as a decade ago on a different platform, or in more sophisticated forms that utilise bots to do the work. So, do I still want to live on this planet? I guess that depends on what the state of the internet will be in time.
References
Auerbach, D. (2014, September 4). Gaming Journalism is Over. Slate. https://slate.com/technology/2014/09/gamergate-explodes-gaming-journalists-declare-the-gamers-are-over-but-they-are.html
Australian Government. (2022). Online Harms and Safety. https://www.internationalcybertech.gov.au/our-work/security/online-harms-safety
Barlow, J. P. (1996). A declaration of the independence of cyberspace. https://www.eff.org/cyberspace-independence
Carlson & Frazer. (2018). Social Media Mob: Being Indigenous Online. Macquarie University. https://research-management.mq.edu.au/ws/portalfiles/portal/85013179/MQU_SocialMediaMob_report_Carlson_Frazer.pdf
Groening, M. (Writer) & Carey-Hill, D. (Director). (2010, August 12). A Clockwork Origin (Season 7, Episode 9) [Television series episode], Futurama. 20th Century Fox Television, Comedy Central.
Hern, A. (2015, January 14). Gamergate hits new low with attempts to send Swat teams to critics. The Guardian. https://www.theguardian.com/technology/2015/jan/13/gamergate-hits-new-low-with-attempts-to-send-swat-teams-to-critics
Internet Historian. (2017, March 16). Capture the Flag | He Will Not Divide Us [Video]. YouTube. https://www.youtube.com/watch?v=vw9zyxm860Q
Katawashounen. (2014). What is GamerGate? [Online forum post]. https://www.reddit.com/r/OutOfTheLoop/comments/2fgfpa/what_is_gamergate/
Lamoureux, M. (2017, March 12). How 4Chan’s Worst Trolls Pulled Off the Heist of the Century. Vice. https://www.vice.com/en/article/d7eddj/4chan-does-first-good-thing-pulls-off-the-heist-of-the-century1
Lewis, H. (2015, January 11). Gamergate: a brief history of a computer-age war. The Guardian. https://www.theguardian.com/technology/2015/jan/11/gamergate-a-brief-history-of-a-computer-age-war
Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press. https://doi.org/10.12987/9780300245318
Sinpeng, A. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://hdl.handle.net/2123/25116
Stephan, S. (2020, July 8). Comparing Platform Hate Speech Policies: Reddit’s Inevitable Evolution. Stanford Internet Observatory. https://cyber.fsi.stanford.edu/io/news/reddit-hate-speech
Stuart, K. (2014, September 3). Gamergate: the community is eating itself but there should be room for all. The Guardian. https://www.theguardian.com/technology/2014/sep/03/gamergate-corruption-games-anita-sarkeesian-zoe-quinn
United Nations. (2021). What is Hate Speech? https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech
Woods, L. (2021). Obliging Platforms to Accept a Duty of Care. In M. Moore & D. Tambini (Eds.), Regulating Big Tech: Policy Responses to Digital Dominance (pp. 93–109). Oxford University Press.