Why is Facebook’s content moderation system unable to address racist speech towards Indigenous Australians?

Have you ever experienced racist speech in everyday life? If you haven’t, you are lucky, because it feels like being slapped in the face. Racist discourse is a representative form of hate speech, and its harm to minorities is comparable to overt physical injury, inflicting both immediate insult and long-term pain on its targets (Sinpeng et al., 2021, p. 6).

This harm has been transformed and aggravated on social media platforms. As major mediators of our online activities, platforms are not neutral (Matamoros-Fernández, 2017, p. 933). They reinforce existing power hierarchies and dramatically accelerate the spread of racist speech, exposing minorities to discrimination and violence (Carlson & Frazer, 2018, p. 12). Such racism is common on Facebook in Australia, particularly towards Indigenous people. Although Facebook has made regulatory efforts on the issue, from the Combatting Online Hate Advisory Group to the Facebook Oversight Board (Sinpeng et al., 2021), its moderation is still sharply criticised as irrational and as favouring racists (Matamoros-Fernández, 2017, p. 931; Kalsnes & Ihlebæk, 2021, p. 327). This blog therefore analyses why Facebook’s moderation system is weak at addressing racist speech towards Indigenous Australians.

Main Components of Facebook’s Moderation System in Australia

Before turning to the reasons, let’s first look at Facebook’s content moderation ecosystem and how each component of this complicated infrastructure works. Its three main parts are (Figure 1) (Sinpeng et al., 2021, p. 18):

  • Public and Content Policy Team
  • Global Operations Team
  • Community Integrity Team

Figure 1: Facebook Content Moderation Ecosystem (Sinpeng et al., 2021, p. 18)

These three areas work cross-functionally; they contain and are supported by multiple roles, mainly including (Matamoros-Fernández, 2017, p. 931; Siapera & Viejo-Otero, 2021; Sinpeng et al., 2021):

  • Voluntary users
  • Outsourced content reviewers
  • Artificial Intelligence (AI)
  • Market Specialists
  • Trusted partners
  • Stakeholders

All of these components matter for handling racist speech towards Aboriginal Australians on Facebook, especially voluntary users, outsourced reviewers, and AI.

Voluntary users

To cope with exponentially increasing racist discourse (Flew, 2021, p. 92), Facebook has made the flagging tool a standard affordance and placed it conspicuously to invite users to report controversial content (Matamoros-Fernández, 2017, p. 936). In doing so, it gains millions of voluntary moderators in Australia. When users see content they perceive as racist, they can flag it, which places the post in a complaint queue for formal moderators (Gillespie, 2017, p. 267).
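
Conceptually, flagging works like a simple report queue: a user’s flag attaches a complaint to the post, and complaints accumulate until a formal moderator picks them up. The sketch below is purely illustrative; the names and structure are hypothetical and do not reflect any actual Facebook API.

```python
# Illustrative sketch of a flag-to-queue pipeline (hypothetical names,
# not Facebook's actual implementation).
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    post_id: str
    reporter_id: str
    reason: str  # e.g. "hate_speech"

complaint_queue: deque = deque()

def flag_post(post_id: str, reporter_id: str, reason: str) -> None:
    """A user flags a post; the complaint joins the moderators' queue."""
    complaint_queue.append(Flag(post_id, reporter_id, reason))

def next_complaint() -> Optional[Flag]:
    """A formal moderator pulls the oldest unreviewed complaint, if any."""
    return complaint_queue.popleft() if complaint_queue else None
```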

However, the flagging activity is a double-edged sword:

  1. Users often lack motivation, since flagging is entirely voluntary and unpaid (Gillespie, 2017, p. 268).
  2. Because user preference is the only standard, many complaints are unduly subjective: users often flag content because they dislike it or its publisher, not because it is discriminatory (Suzor, 2019, p. 24).
  3. Because flagging is available equally to all Facebook users, it can be weaponised by abusive users to attack anti-racists’ accounts and content. For example, Celeste Liddle’s post supporting Aboriginality was removed automatically after malicious users repeatedly flagged it (Matamoros-Fernández, 2017, p. 936).

Outsourced content reviewers

Much of the flagged content is automatically sent to outsourced moderators in developing countries. Each worker provides a frontline review of thousands of Facebook posts per day and adjudicates whether they are racist. The work is low-paid and repetitive yet challenging, because a decision must be made in around 10 seconds per post (Suzor, 2019, p. 16).

This means that the flagged user’s identity and the surrounding context of the post are largely ignored (Gillespie, 2017, p. 267). As a result, many posts that carry racist subtexts or conceal discrimination towards Indigenous Australians behind humour are not detected and removed (Matamoros-Fernández, 2017, p. 938), as analysed later.

AI

Facebook has also deployed AI moderation to reduce subjectivity. The AI moderator scores flagged content according to its similarity to content previously judged as racist by human moderators. Content whose score is higher than a pre-set limit is removed automatically; otherwise it is sent to human moderators (Siapera & Viejo-Otero, 2021, p. 124; Sinpeng et al., 2021).
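
In outline, this is a threshold-based routing step layered on top of a similarity score. The sketch below is a minimal illustration only; the scoring function, names, and threshold are hypothetical stand-ins, since Facebook’s real models and settings are not public.

```python
# Minimal sketch of threshold-based routing for flagged content.
# score_similarity and REMOVE_THRESHOLD are hypothetical stand-ins.

REMOVE_THRESHOLD = 0.9  # hypothetical pre-set limit

def score_similarity(post_text: str, labelled_examples: list) -> float:
    """Crude stand-in for a learned model: the share of words in the post
    that also appear in posts human moderators labelled as racist."""
    words = set(post_text.lower().split())
    racist_words = set()
    for text, is_racist in labelled_examples:  # (text, bool) pairs
        if is_racist:
            racist_words |= set(text.lower().split())
    return len(words & racist_words) / max(len(words), 1)

def route_flagged_post(post_text: str, labelled_examples: list) -> str:
    """Remove automatically when the score is above the limit;
    otherwise queue the post for human review."""
    score = score_similarity(post_text, labelled_examples)
    return "removed_automatically" if score > REMOVE_THRESHOLD else "sent_to_human_review"
```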

AI has improved Facebook’s moderation. In 2019, Facebook’s AI moderators detected and deleted 80.2% of hate speech content globally, although there is no clear figure for racist content towards Aboriginal Australians specifically (Siapera & Viejo-Otero, 2021, p. 124).

However, AI cannot make the moderation system neutral. Subjectivity persists, because the score depends on earlier judgements by human moderators. Moreover, since racist content has become an indispensable data source for optimising AI moderation and generating larger profits (Siapera & Viejo-Otero, 2021, p. 127), will Facebook really do everything it can to curb racist discourse?

Other actors also matter in Facebook moderation. Market specialists watch for potentially racist content and issue warnings in advance; trusted partners are crucial for tracking how racist discourse evolves and for flagging urgent publicity problems; and stakeholders bring strategies from wider society to Facebook’s Public Policy team (Sinpeng et al., 2021, pp. 20-21).

Why Is Facebook’s Moderation System Weak? Starting from Its Defective Top-Level Design and Operating Mechanism

As an American tech company, Facebook is governed by Section 230 of the U.S. Communications Decency Act (the “safe harbor” provision). The law ensures that:

  1. Facebook, as an intermediary, is not liable for users’ posts;
  2. Facebook retains its safe harbor status even if it moderates users’ content;
  3. Facebook is not required to meet any standard of moderation.

This means that Facebook’s moderation is, in essence, entirely voluntary (Matamoros-Fernández, 2017, p. 935; Siapera & Viejo-Otero, 2021, p. 116). Its efforts to improve the moderation apparatus therefore aim at retaining users and generating larger profits rather than at curbing particular kinds of content.

No Emphasis on Racism in the Top-Level Design

The Principles, Terms of Service, and Community Standards form the main top-level design determining how Facebook operates. These texts set the ideological parameters for hate speech moderation, with the Principles carrying the highest priority (Siapera & Viejo-Otero, 2021, p. 118).

The Principles embody fundamental equality and emphasise libertarian individualism (Siapera & Viejo-Otero, 2021, p. 120). They state that “People deserve to be heard and to have a voice — even when that means defending the right of people we disagree with” and that “We have a responsibility to promote the best of what people can do together by keeping people safe and preventing harm” (Figure 2) (Meta, n.d.).

Figure 2: Facebook’s Five Principles. “Give People a Voice” and “Keep People Safe and Protect Privacy” are shown here. Source: Meta.

The Principles treat all users equally and handle all content in the same way. They explicitly encourage information circulation while only vaguely acknowledging potential harm. Such arithmetic equality does little to alleviate the structural oppression of Indigenous Australians and “reinforces the post-racial idea of color-blindness” (Siapera & Viejo-Otero, 2021, p. 120).

Below the Principles, hate speech is first mentioned in the Terms of Service. Although Mark Zuckerberg called them “the governing document that we’ll all live by”, they contain neither clear definitions and rules about hate speech nor much attention to racism. They permit humour, jokes, and satire related to hate speech topics in order to protect freedom of speech, but never explicitly delimit the scope of these exceptions (Matamoros-Fernández, 2017, pp. 935-936; Siapera & Viejo-Otero, 2021, p. 114).

This ambiguity accelerates the spread of racist speech and produces harm. It disadvantages all societal minorities, not just Indigenous Australians, because humour and satire are routinely used to disguise racism and hatred (Matamoros-Fernández, 2017, p. 936).

It is not until the Facebook Community Standards that we find rules directly addressing hate speech. There, hate speech is categorised under Objectionable Content, after Violence and Criminal Behavior and Safety (Facebook Community Standards, n.d.).

However, racist discourse is not emphasised; it appears only as a sub-category of hate speech. Fundamental equality is prominent here too, as all races are treated as equivalent and equally protected by the rules (Siapera & Viejo-Otero, 2021, p. 121). Again, this so-called equality is race-blindness in essence, and it makes discrimination towards Indigenous Australians worse.

Problematic Operation Mechanism

As mentioned before, Facebook’s primary aim is to facilitate information circulation. Its ranking algorithm pursues this aim through liking, sharing, and commenting: content with higher engagement metrics is promoted more widely and attracts still more users to engage (Matamoros-Fernández, 2017, p. 938). Because borderline content alluding to racism or veiled in humour readily attracts users from both sides (Siapera & Viejo-Otero, 2021, p. 125), small and simple user actions add up to the widespread circulation of racist discourse.
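
To see how engagement-weighted ranking rewards borderline content regardless of intent, consider the minimal sketch below. The weights and the Post structure are hypothetical; Facebook’s real News Feed ranking is far more complex and not publicly documented.

```python
# Minimal sketch of engagement-weighted ranking (hypothetical weights).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    """Score a post by weighted engagement; higher scores earn more reach."""
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

def rank_feed(posts: list) -> list:
    """Order a feed by engagement, regardless of whether that engagement
    comes from support or outrage, which is why borderline content rises."""
    return sorted(posts, key=engagement_score, reverse=True)
```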

Facebook’s “post-then-filter” moderation approach exacerbates this diffusion (Suzor, 2019, p. 15; Gillespie, 2017, p. 265). Controversial content is published immediately and draws in users with opposing perspectives, unmoderated, until it is flagged and removed or simply fades over time.

In short, racist content is neither clearly defined nor emphasised in the top-level design, and Facebook’s operating mechanism covertly encourages its dissemination. Since the moderation system operates under these defective guidelines, it is understandable that it fails to address racist content towards Aboriginal Australians.

Continuing with Racist Speech’s Blurry Definition and People’s Different Understandings in Practice

Besides Facebook’s problematic top-level design and operating mechanism, the blurry definition of racist speech and people’s different understandings of it in practice also undermine moderation.

Humour, Jokes, and Subtexts

Facebook connives at racist content concealed by humour, “scientific” language, and jokes, particularly on meme pages and in comments. By contrast, overt racism is far less common (Carlson & Frazer, 2018, p. 12; Flew, 2021, p. 92; Matamoros-Fernández, 2017, p. 938).

For example, “Aboriginal memes” (Oboler, 2012) are often used to express Australia-specific stereotypes in the name of “humour” or being “funny” (Figure 3) (Carlson & Frazer, 2018, p. 12).

Figure 3: Stereotypical and racist Aboriginal memes. The first trades on stereotypes about substance abuse, while the second vilifies Indigenous people’s intelligence (Oboler, 2012).

These memes, together with other more implicit racist comments, are hard for most outsourced moderators to detect when reviewed out of context (Gillespie, 2017, p. 267), because they read as racist only when directed at Indigenous people within an Australia-specific context.

Where Is the Line?

Once concealed by humour, content that is racist in essence becomes borderline material whose reading depends on personal understanding (Siapera & Viejo-Otero, 2021, p. 125). For example, several Indigenous interviewees report that their friends describe them in humorous yet derogatory ways, such as being “too white” to be Indigenous (Carlson & Frazer, 2018, p. 12).

Different people therefore have opposite understandings of, and sensitivities to, such veiled speech. Indigenous people, as its direct targets, often feel offended and uncomfortable, while their non-Indigenous friends see it as a joke. This blurred line makes it hard for moderators, as third parties reviewing content out of context, to empathise with the targets and decide where the boundaries should lie (Kalsnes & Ihlebæk, 2021, p. 330).

Case Study: Pauline Hanson’s Controversial Facebook Post

Pauline Hanson is a senator for Queensland. On July 27, 2022, she published a post about the “acknowledgement of country” on Facebook that sparked wide discussion (https://www.facebook.com/watch/?v=1004793673523975&ref=sharing). The post has over 300K views, 26K likes, and 11K comments.

In the post, Hanson refuses to give the acknowledgement of country in the Senate, arguing that it reinforces racial division in Australia because the country also belongs to non-Indigenous Australians. She says displaying the Aboriginal flag in the Senate chamber is unequal to others, as “Parliament is the people’s house”. She also claims the acknowledgement is just a token that can “do nothing to address indigenous disadvantage or close the gaps” (Hanson, 2022).

The post addresses a sensitive issue in Australia and does not stand with Aboriginal people. Yet under Facebook’s Principles, it is also a voice directed at a controversial topic rather than at Indigenous individuals or groups. The language itself appears scientific and logical, at least to the thousands of people supporting her, and does not rise to the level of explicitly harming or attacking someone.

However, the post embodies the post-racial idea of colour-blindness, because its subtext asserts an ideology in which all Australians are already equal. This coincides perfectly with the fundamental equality that permeates Facebook’s top-level design and moderation guidelines, not to mention that Hanson uses careful language throughout to veil that race-blindness (Flew, 2021, p. 92; Siapera & Viejo-Otero, 2021, p. 120). It is therefore unsurprising that the moderation team, reviewing the post out of context, fails to address it.

The metrics and comments show how this borderline post provoked dispute (Figure 4). The top comment received around 1.4K likes and 338 replies. It would be unremarkable on many political posts, and its language shows no overt racist preference, but its subtext is reversed entirely by the fact that it sides with Hanson. The second top comment, with 568 likes, is similar and supports Hanson’s ideas even more implicitly. Scrolling through the comments, one can easily find both more explicit racist speech towards Indigenous people and traces of moderation, which shows the moderators’ efforts. At the same time, it exposes moderation’s limitations and Facebook’s defective mechanism.

Figure 4: A screenshot of Pauline Hanson’s post and two top comments. Source: Meta.

Conclusion: Change from the Top and More Accurate Moderation Are Needed

This blog has shown the Facebook moderation team’s efforts, improvements, and ultimate inability to regulate racist content towards Indigenous Australians. Although the multiple roles in the ecosystem have their strengths, no internal power compensates for their weaknesses. The moderation team adjudicates content according to Facebook’s Principles, Terms of Service, and Community Standards, but these top-level guidelines fail to clearly define and emphasise racist speech and instead reflect race-blindness, which misleads and confuses moderation. Change from the top is needed.

Moreover, the complexity of defining racist speech in practice, and people’s differing understandings of it, complicate moderation further. As humour and “scientific” language become prevalent in posts, comments, and memes, the line between racist speech and merely offensive or hurtful words blurs. Subtexts that carry racist meaning only in specific contexts now appear more often than explicit racism. Pauline Hanson’s case embodies these dynamics, which cause moderators reviewing content out of context to fail to detect and remove it. More accurate moderation, and more moderators with the relevant cultural backgrounds, are needed.

References

Carlson, B., & Frazer, R. (2018). Social media mob: being Indigenous online. Macquarie University.

Facebook Community Standards. (n.d.). Retrieved from https://transparency.fb.com/en-gb/policies/community-standards/

Flew, T. (2021). Hate Speech and Online Abuse. In Regulating Platforms (pp. 91-94). Cambridge: Polity Press.

Gillespie, T. (2017). Regulation of and by Platforms. In J. Burgess, A. Marwick & T. Poell (Eds.), The SAGE Handbook of Social Media (pp. 254-278). SAGE Publications, Limited.

Hanson, P. (2022, July 27). I consider that ‘acknowledgement of country’ perpetuates racial division in Australia. Facebook. https://www.facebook.com/watch/?v=1004793673523975&ref=sharing

Kalsnes, B., & Ihlebæk, K. A. (2021). Hiding hate speech: political moderation on Facebook. Media, Culture & Society, 43(2), 326–342. https://doi.org/10.1177/0163443720957562

Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Meta (n.d.). Our Principles. Retrieved April 9, 2023, from https://about.meta.com/company-info/

Oboler, A. (2012). Aboriginal Memes & Online Hate. Online Hate Prevention Institute. http://ohpi.org.au/reports/IR12-2-Aboriginal-Memes.pdf

Siapera, E., & Viejo-Otero, P. (2021). Governing Hate: Facebook and Digital Racism. Television & New Media, 22(2), 112–130. https://doi.org/10.1177/1527476420982232

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook Content Policy Research on Social Media Award: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf

Suzor, N. (2019). Who Makes the Rules?. In Lawless: The Secret Rules That Govern Our Lives (pp. 10-24). Cambridge University Press.
