Dog whistles, implications and the lols: The challenges of moderating hate speech

Free expression: Where do we draw the line? 

In the 1996 ‘A Declaration of the Independence of Cyberspace’, John Perry Barlow of the Electronic Frontier Foundation announced: 

“We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”

This quote is famous for many reasons, not least because it captures the sentiment of utopian hope and possibility that was pinned on the rapidly evolving social internet of the 1990s. It also points to one of the classic conundrums of internet governance: whether the principle of free speech should be a defining characteristic of how digital spaces are organised. 

While this question is still far from resolved, it is rare to find an absolute commitment to free speech online in the present day. Some companies, organisations and public figures still espouse a belief in absolute free speech, but in most contexts it is accepted that there will be legal and cultural limitations on what we consider acceptable speech (Sorial 2022). A commonly agreed threshold for what is unacceptable (and where legal or private policy intervention is appropriate) is ‘hate speech’. 

Defining hate speech 

Still, while it might be agreed that hate speech should not be allowed, what actually constitutes hate speech is not always clear cut and can differ based on geographic location, political ideology, and interpretation of key terms in hate speech definitions. In a 2021 research report examining Facebook’s moderation practices related to hate speech in the Asia Pacific, Aim Sinpeng highlights that a key challenge in moderating hate speech is that it is so often ‘language and context dependent’ (2021, p. 1). This makes the regulation and moderation of online hate speech fraught with challenges. 

These challenges don’t just arise from the moral and ethical complexities involved in determining the boundaries of speech regulation, but also from the practical question of when language tips into identifiable hate speech. 

Grey areas requiring careful analysis of context could be anything from tweets with thinly veiled slurs (or, alternatively, reclaimed terminology being used by members of a protected category), Instagram posts that claim to be ‘anti-jihadi’ rather than Islamophobic, a Facebook comment about women that is ‘just a joke’, or a TikTok video that is ‘engaging in debate’ about welfare programmes for protected categories.

The content in these examples could constitute hate speech. It could also not. It really depends, and that’s where things get tricky. Interpreting the sentiment and potential ramifications of hate speech can be incredibly difficult when the hateful language is not explicit. 
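
To make this difficulty concrete, the toy sketch below shows how a purely keyword-based filter behaves on grey-area content. It is a deliberately naive illustration, not any platform’s real system: the blocklist, function name and example posts are hypothetical placeholders. It catches an explicit slur, wrongly flags a reclaimed use of the same term, and misses a dog whistle entirely, which is precisely the ‘language and context dependent’ problem Sinpeng describes.

```python
# Hypothetical sketch only: a naive keyword filter with no notion of context.
# The blocklist tokens and example posts are placeholders, not real platform data.

BLOCKLIST = {"slur_a", "slur_b"}  # stand-ins for explicit slurs


def naive_filter(post: str) -> bool:
    """Flag a post if any word matches the blocklist, ignoring who says it and why."""
    words = {word.strip(".,!?;*").lower() for word in post.split()}
    return bool(words & BLOCKLIST)


examples = [
    "Those slur_a people don't belong here.",           # explicit slur: flagged (correctly)
    "Proud to reclaim slur_a for our own community.",    # reclaimed use: flagged (wrongly)
    "You know what those people are really like...",     # dog whistle: not flagged (missed)
]

for post in examples:
    print(naive_filter(post), "->", post)
```

Even this tiny example shows why context, speaker identity and implication matter more than the presence of particular words.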

So how do Australia’s legal frameworks grapple with free expression and hate speech? 

On the topic of explicit outlines, it is worth considering what Australian laws have to say about the limits of free expression and definitions of hate speech. 

Unlike many liberal democracies, Australia does not have a formal protection for freedom of expression in its constitution. It instead has an ‘implied right to political communication’ (Attorney-General’s Department 2023). Outside of constitutional protection, Australia is also a party to the United Nations’ International Covenant on Civil and Political Rights (ICCPR), which outlines the right to freedom of opinion and expression.  

The Attorney-General’s Department outlines that a key limitation on free expression in Australia relates to its obligation to ‘outlaw vilification of persons on national, racial or religious grounds’ (2023). This is reflected in the definitions of hate speech under anti-discrimination laws, including the Racial Discrimination Act 1975 and the Criminal Code Act 1995 at the federal level. Notably, Australia’s federal discrimination laws focus on race and ethnic origin and do not cover hate speech related to other protected categories such as sexuality, gender identity and disability. 

The Racial Discrimination Act 1975 outlines important defining factors for understanding how hate speech is outlawed at a national level. It states that:

It is unlawful for a person to do an act, otherwise than in private, if:
(a) the act is reasonably likely in all the circumstances to offend, insult, humiliate or intimidate another person or group of people, and
(b) the act is done because of the race, colour or national or ethnic origin of the other person or some or all of the people in the group. 

The Act itself does not outline specific contextual considerations for speech in online versus offline contexts. This is mainly regarded as being addressed in the section of the Criminal Code Act 1995 that makes it an offence to use the ‘internet intentionally to disseminate material that results in a person being menaced or harassed’. Under this legal framework, threat or harassment must be present; it cannot be used to prosecute material solely on the basis that it causes offence.

Figure 1 – Australian Human Rights Commission 

Hate speech is also considered across a variety of state and territory laws, including: New South Wales’ Crimes Act 1900 and Anti-Discrimination Act 1977; the Australian Capital Territory’s Discrimination Act 1991; the Northern Territory’s Anti-Discrimination Act 1992; Queensland’s Anti-Discrimination Act 1991; South Australia’s Racial Vilification Act 1996; Western Australia’s Criminal Code Amendment (Racial Vilification) Act 2004; Victoria’s Racial and Religious Tolerance Act 2001; and Tasmania’s Anti-Discrimination Act 1998. These laws largely focus on racial and religious categories, although their definitions and the activities they cover are not consistent. 

The Australian Human Rights Commission breaks down how this patchwork of federal and state-based laws translates to content hosted online. It also outlines the process for lodging a complaint with the Commission for violations of the Act and notes that, to date, it has received relatively few complaints related to internet content. 

Amidst this crowded landscape of laws, it’s important to consider a key fact: they (and the complaints system attached to them) deal with a relatively narrow definition of hate speech and of the contexts in which it can be prosecuted. 

So how do platforms grapple with hate speech? 

Digital platforms also hold their own definitions of hate speech and hateful conduct, outlined in platform policy documents. In many cases, these policies include definitions of hate speech that extend beyond what is covered in Australian anti-discrimination laws. 

For example, Twitter and Facebook have specific policies related to hate speech, which outline the kinds of speech that incite hatred or violence against protected categories. 

Twitter’s hateful conduct policy (2023)

The policy lists the protected categories it covers and specifically prohibits dehumanisation, stating: 

You may not directly attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.

We prohibit the dehumanisation of a group of people based on their religion, caste, age, disability, serious disease, national origin, race, ethnicity, gender, gender identity, or sexual orientation.

Facebook’s hate speech policy (2023)

Similarly, the policy outlines the categories it relates to, as well as some of the kinds of speech it would consider violative.

We define hate speech as a direct attack against people – rather than concepts or institutions – on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. 

We define attacks as violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.

While these policies outline expansive and admirable definitions, enforcement is another story. There have been frequent criticisms that the way these policies are interpreted and applied is opaque and subject to error (Suzor 2019; Sinpeng et al. 2021). 

Even if these policies were evenly applied, some critics do not believe they adequately address or capture the potential harms of online hate speech. Sinpeng et al. (2021) highlight that existing definitions of hate speech do not necessarily capture all of the experiences that those belonging to protected categories have in relation to hateful content.

Representatives of protected categories have advocated for these definitions to be broadened in order to capture harm created by language targeted at groups, rather than only at specific individuals. Beyond this, some advocacy organisations have argued that definitions of dehumanisation need to be updated to reflect implied as well as explicit meaning. For example, in its 2021 submission to Australia’s Inquiry into Social Media and Online Safety, the Australian Muslim Advocacy Network included reference to how ‘great replacement theory’ has the potential to incite violence against Muslims, even though the content where it is discussed rarely targets an individual. It cites the inclusion of ‘great replacement theory’ in the manifesto of the Australian terrorist responsible for the 2019 Christchurch shootings as a demonstration of the real-world harm that dehumanising language can create despite never being directed at a specific target.   

Some approaches suggest dealing with these complexities by finding other moderation paths, rather than attempting to resolve the overarching question of whether speech should be removed from the internet altogether. Under the new ownership of Elon Musk, Twitter’s moderation policy has shifted to what it calls ‘freedom of speech, not freedom of reach’. This refers to an approach of moderating content by deamplifying it: rather than being removed, content is not algorithmically promoted, and engagement actions such as resharing may be disabled. 
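
As a rough illustration of the difference between removal and deamplification, the hypothetical sketch below maps a made-up classifier label to a reach-based outcome rather than a take-down decision. The labels, fields and function are assumptions made purely to show the shape of the approach; they do not reflect Twitter’s actual implementation.

```python
# Hypothetical sketch of "freedom of speech, not freedom of reach":
# borderline content stays up but loses amplification, instead of being removed.
# Labels and fields are invented for illustration only.

from dataclasses import dataclass


@dataclass
class ReachDecision:
    remove: bool           # take the post down entirely
    recommend: bool        # eligible for algorithmic promotion
    allow_resharing: bool  # resharing / retweeting enabled


def decide(label: str) -> ReachDecision:
    """Map a hypothetical classifier label to a reach-based moderation outcome."""
    if label == "severe_or_illegal":
        return ReachDecision(remove=True, recommend=False, allow_resharing=False)
    if label == "borderline_hateful":
        # Deamplified: visible to followers, but not promoted or reshareable.
        return ReachDecision(remove=False, recommend=False, allow_resharing=False)
    return ReachDecision(remove=False, recommend=True, allow_resharing=True)


print(decide("borderline_hateful"))
```

The design trade-off is that the speech remains online, but the platform limits how far and how fast it can spread, sidestepping (rather than resolving) the question of whether it should exist at all.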

Case study: Is J.K. Rowling transphobic? 

Whether or not violence or hatred can be incited through implication is a question that has been brought into sharp focus by the comments of the author J.K. Rowling in relation to transgender identity and rights. 

The language in the tweets explicitly states support for trans people, and it is unlikely it would be interpreted as explicit incitement of hatred or violence towards those belonging to the category. However, the debate around these tweets has largely centred on the question of whether the implication that biological sex is essential and non-negotiable is an invalidation of trans identity and could therefore be considered hate speech. 

Politically contested topics lend themselves to debates around free expression and the limits of hate speech. While the Australian Human Rights Commission outlines that transgender people are protected under Australian discrimination laws, these protections do not stretch to vilification, and the definition of trans identity is currently a politically and socially divisive topic in Australia. ABC News recently covered how this topic is generating controversial public debate, advocacy, and protest (Karvelas 2023). These debates could themselves be experienced as harmful and hateful by those belonging to the protected category. 

Looking ahead – A timely national conversation 

As the dynamics of online communities and interactions continue to evolve, so does our understanding and interpretation of hate speech in online contexts. Incitement of violence towards a protected category in the context of online speech is potentially very different to the offline dynamics that existing discrimination laws were intended to cover when they were developed.  

The Guardian recently reported that, as we approach the referendum on an Indigenous Voice to Parliament, there is increasing discussion of the role that hate speech will play in the national conversation surrounding the event (Butler 2023). This points to wider questions that must be considered about how online hate speech interventions relate to deeper social problems (Flew 2021) and the role they play in a broader conversation about systemic issues of racism and the aftermath of a nation’s colonial past (Carlson & Frazer 2018). 

As Australia’s Albanese government considers updates to the national legal frameworks that could deal with hate speech (Minister for Communications 2023), it’s essential that these definitional challenges, and the social contexts they arise from, are carefully weighed in the reviews. 

Resolving (and potentially updating) the legal boundaries of hate speech content is imperative to ensure clear and enforceable speech regulation in future. 

Reference list

Anti-discrimination Act 1977, https://legislation.nsw.gov.au/view/html/inforce/current/act-1977-048

Anti-discrimination Act 1991, https://www.legislation.qld.gov.au/view/html/inforce/current/act-1991-085

Anti-discrimination Act 1992, https://legislation.nt.gov.au/Legislation/ANTIDISCRIMINATION-ACT-1992

Anti-Discrimination Act 1998, https://www.legislation.tas.gov.au/view/html/inforce/current/act-1998-046

Attorney-General’s Department. (2023). The right to freedom of opinion and expression, https://www.ag.gov.au/rights-and-protections/human-rights-and-anti-discrimination/human-rights-scrutiny/public-sector-guidance-sheets/right-freedom-opinion-and-expression

Australian Human Rights Commission. (2023). Racial Vilification Law Australia, https://humanrights.gov.au/our-work/racial-vilification-law-australia

Australian Human Rights Commission. (2023). Transgender, https://humanrights.gov.au/quick-guide/12104#:~:text=People%20who%20are%20transgender%20are,related%20characteristics%20of%20the%20person.

Barlow, JP. (1996). A Declaration of the Independence of Cyberspace, Electronic Frontier Foundation, https://www.eff.org/cyberspace-independence#:~:text=We%20must%20declare%20our%20virtual,of%20the%20Mind%20in%20Cyberspace.

Butler, J. (2023, March 29). Government puts social media giants on notice over misinformation and hate speech during voice referendum, The Guardian, https://www.theguardian.com/australia-news/2023/mar/29/government-puts-social-media-giants-on-notice-over-misinformation-and-hate-speech-during-voice-referendum

Carlson, B., Frazer, R. (2018). Social Media Mob: Being Indigenous Online, Macquarie University, https://research-management.mq.edu.au/ws/portalfiles/portal/85013179/MQU_SocialMediaMob_report_Carlson_Frazer.pdf

Crimes Act 1900, https://legislation.nsw.gov.au/view/html/inforce/current/act-1900-040

Criminal Code Act 1995, https://www.legislation.gov.au/Details/C2021C00183

Criminal Code Amendment (Racial Vilification) Act 2004, https://www.legislation.wa.gov.au/legislation/statutes.nsf/RedirectURL?OpenAgent&query=mrdoc_4849.doc

Discrimination Act 1991, https://www.legislation.act.gov.au/a/1991-81

International Covenant on Civil and Political Rights 1976, https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights

Karvelas, P. (2023, March 26), Kellie-Jay Keen-Minshull’s anti-trans rights campaign has become a headache for the Liberal Party. But the issue runs deeper than one MP, ABC News, https://www.abc.net.au/news/2023-03-26/kellie-jay-keen-minshullanti-trans-rights-liberal-party-debate/102142130

Racial and Religious Tolerance Act 2001, https://content.legislation.vic.gov.au/sites/default/files/3b0e389a-7fec-36a3-b4fe-f2f33595ec0c_01-47aa011%20authorised.pdf 

Racial Discrimination Act 1975, https://www.legislation.gov.au/Details/C2016C00089

Racial Vilification Act 1996, https://www.legislation.sa.gov.au/__legislation/lz/c/a/racial%20vilification%20act%201996/current/1996.92.auth.pdf 

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook Content Policy Research on Social Media Award: Regulating Hate Speech in the Asia Pacific, The University of Sydney, Sydney, https://ses.library.usyd.edu.au/bitstream/handle/2123/25116.3/Facebook_hate_speech_Asia_report_final_5July2021.pdf?sequence=3&isAllowed=y

Australian Muslim Advocacy Network. (2021). Submission to the Inquiry into Social Media and Online Safety, http://www.aman.net.au/wp-content/uploads/2022/06/Sub03-Australian-Muslim-Advocacy-Network.pdf

Meta Transparency Centre. (2023). Hate Speech, https://transparency.fb.com/en-gb/policies/community-standards/hate-speech/

Minister for Communications. (2023, April 5). Government empowering Australians through a holistic approach to online safety [Press release]. https://minister.infrastructure.gov.au/rowland/media-release/government-empowering-australians-through-holistic-approach-online-safety

Sorial, S. (2022). International Comparative Approaches to Free Speech and Open Inquiry (FSOI), ed. Sheahan, LC & Lukianoff, G, Springer International Publishing AG, Cham.

Suzor, N. (2019). Lawless: The Secret Rules That Govern Our Digital Lives, Cambridge University Press, Cambridge.

Flew, T. (2021). Regulating Platforms, Polity, Cambridge.

Twitter Help Centre. (2023). Hateful Conduct, https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy 
