The Digital Dilemma: Hate Speech, Online Harm, and Indigenous Australians

Introduction

Over the past few years, the digital world has grown and evolved rapidly, changing how we interact, collaborate, and communicate. This evolution has created numerous advantages and opportunities, but it has also produced new difficulties, particularly for marginalised groups such as Indigenous Australians. Social media platforms have made it simpler for people from different backgrounds to communicate, yet they have also become breeding grounds for the online propagation of offensive and hateful speech. Using recent news coverage as a case study, this blog post explores the nuanced interplay between Indigenous Australians, internet culture, and governance. I will analyse the challenges Indigenous Australians encounter online and consider potential remedies to make the Internet a more welcoming and secure environment for all users.

Understanding the Landscape: Indigenous Australians and Digital Vulnerabilities

Indigenous Australians have historically been marginalised, and this marginalisation has carried over into the digital sphere, where online platforms have created new avenues for harassment, hate speech, and the dissemination of false information. This has deepened already existing inequalities and highlighted the critical necessity of addressing these concerns. The Australian Government's recent warning to social media behemoths such as Facebook, Google, and Twitter to combat misinformation and hate speech during the voice referendum is a testament to the growing concern about these platforms' role in perpetuating negative stereotypes and narratives about Indigenous Australians. The voice referendum therefore serves as a useful case study for understanding the broader ramifications of hate speech and online harm for underrepresented communities.

The Role of Algorithms and Platform Governance in Shaping Online Discourse

The algorithmic makeup of platforms is a significant factor in how online debate takes shape. Massanari's (2016) research on Reddit, for instance, reveals how a platform's algorithms, governance, and culture can inadvertently support toxic technocultures. Gorwa (2019b) adds to this picture with his notion of the platform governance triangle, which describes the informal regulation of online content by states, firms, and civil society, further complicating the scene. Matamoros-Fernández's (2017) research on "platformed racism" sheds additional light on the role of social media platforms in mediating and circulating race-based controversies. Together, these studies raise substantial questions about platforms' duties in preventing the spread of hate speech and misinformation, as well as about the need for more effective content moderation.

Content Moderation and Its Challenges

Content moderation is a necessary part of governing the Internet. According to Roberts (2019), content moderators are the people who work behind the scenes to ensure that material on social media platforms adheres to the policies and guidelines those platforms have established. However, the existing moderation practices of these platforms frequently fall short of adequately addressing hate speech and online harm, particularly for oppressed communities such as Indigenous Australians. If we want to ensure that digital spaces are safe for everyone, we need a more proactive and nuanced approach to content moderation.

The Case Study: Government Response to Social Media Giants and the Voice Referendum

The Australian Government's recent response to social media giants sheds light on the growing concern regarding the role these platforms play in shaping public discourse, particularly during significant events such as the voice referendum. The referendum, a momentous milestone for Indigenous Australians, brings into focus the broader ramifications of internet cultures and governance for underrepresented communities. This section looks more closely at the Government's response to the social media giants during the referendum campaign, analysing the obstacles to combatting hate speech and online harm, and the potential solutions.

Background on the Voice Referendum

The voice referendum concerns a proposed constitutional amendment to establish a representative body for Indigenous Australians. This body, known as "the Voice," is intended to ensure that Indigenous Australians have a say in the formulation and execution of policies and laws that bear directly on their everyday lives (Australian Government, 2023). The referendum has sparked a substantial amount of public debate, with proponents and opponents expressing their views across a variety of media, including social media (Carlson et al., 2017).

The Issue of Misinformation and Hate Speech

As the voice referendum gains momentum, the risk of misinformation and hate speech spreading on social media platforms increases. Misinformation can sow confusion and misunderstanding among the public, potentially undermining the democratic process, while hate speech directed at Indigenous Australians can further marginalise the community and impair its ability to engage in constructive conversation on the issues that concern it (Matamoros-Fernández, 2017). In response to these concerns, the Australian Government has issued a warning to social media giants, demanding that they take greater responsibility for stopping the spread of misinformation and hate speech during the referendum (Butler, 2023b).

Government’s Response: Putting Social Media Giants on Notice

The Australian Government's decision to put pressure on social media giants highlights the need for greater platform accountability (Gorwa, 2019a; Gorwa, 2019b). By urging Facebook, Google, and Twitter to take preventative measures against hate speech and false information, the Government has made it abundantly clear that these platforms have a critical part to play in ensuring the democratic process is carried out openly and honestly (Butler, 2023b). This action also acknowledges the limitations of existing regulatory frameworks in dealing with the challenges posed by the digital world (Flew, 2021). The Government's intervention suggests that additional regulation may be necessary to guarantee a secure online environment, particularly for vulnerable communities such as Indigenous Australians, and this realisation may pave the way for more comprehensive and effective legislation in future.

Challenges in Addressing Hate Speech and Online Harm

Social media platforms face many obstacles in preventing harmful speech and content from being shared online. One of the most significant is recognising and articulating what constitutes hate speech: given its subjective nature, identifying what qualifies can be complex and contentious (eSafety Commissioner, 2022). Platforms must also balance protecting people's right to free speech with making the Internet a place where everyone feels welcome and safe (Gosse & Hodson, 2021). The sheer volume of content presents another obstacle: with millions of new posts, comments, and shares every day, moderators cannot manually review every piece of content (Roberts, 2019). This problem is compounded by the fact that harmful content is frequently understated or camouflaged, making it even more challenging for moderators to recognise and eliminate (Massanari, 2016).

Potential Solutions and the Role of Social Media Platforms

Social media platforms must meet the challenges of harmful online speech and conduct head-on, using a comprehensive, multi-pronged strategy (Sinpeng et al., 2021). This may involve the following:

  1. Investing in advanced technologies: Platforms should invest in technologies such as artificial intelligence and machine learning to improve their ability to identify and remove harmful content. These technologies can assist in identifying patterns and trends associated with harmful online speech and make content moderation more proactive and effective (Roberts, 2019); a brief illustrative sketch of this pattern follows this list.
  2. Promoting digital literacy: Platforms can play an important role in educating users on how to identify and report harmful content and in promoting responsible online behaviour. Encouraging users to be more discerning about the content they consume and share helps create a more welcoming and secure online environment (Carlson & Frazer, 2018).
  3. Enhancing community guidelines: Platforms should develop and enforce clear guidelines that explicitly address hate speech, misinformation, and other forms of online harm. These guidelines should be applied clearly and uniformly to all users, regardless of their status or level of influence on the platform (Gorwa, 2019a).
  4. Collaborating with stakeholders: Platforms should work with various stakeholders, including governments, civil society organisations, and experts, to combat hate speech and other forms of online harm. Collaboration allows stakeholders to develop a more comprehensive understanding of the issues and devise more effective strategies to address them (Gorwa, 2019b).
  5. Empowering underrepresented communities: Platforms should amplify the voices of underrepresented communities such as Indigenous Australians, providing the resources, support, and opportunities these communities need to participate in meaningful online conversations and share their points of view (Carlson et al., 2017).
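
To make the first point concrete, the sketch below shows, in Python, the general pattern of machine-assisted triage: a classifier scores each post, and anything above a risk threshold is flagged for human review. This is a minimal illustration assuming the scikit-learn library; the example posts, labels, and threshold are invented for this post and do not represent any platform's actual system. As Roberts (2019) emphasises, automated scoring of this kind supplements human moderators rather than replacing them.

```python
# Minimal, illustrative sketch of machine-assisted content triage.
# NOT any platform's real system: the training data, labels, and
# threshold are invented to show the "classifier flags, human reviews"
# pattern discussed above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (1 = potentially harmful, 0 = benign).
posts = [
    "You people don't belong in this country",          # invented harmful example
    "Great turnout at the community event today",
    "Go back to where you came from",                   # invented harmful example
    "Looking forward to the referendum debate tonight",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def triage(post: str, threshold: float = 0.5) -> str:
    """Flag a post for human review if its modelled risk is high."""
    risk = model.predict_proba([post])[0][1]  # probability of class 1
    return "flag for human review" if risk >= threshold else "allow"

print(triage("You don't belong here"))  # likely flagged by this toy model
```

In practice, real systems are trained on far larger datasets and must contend with the camouflaged, context-dependent language noted above, which is why the human review step remains essential.
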
Implications of the Government’s Response and the Future of Internet Governance

The Australian Government's response to social media giants during the voice referendum sheds light on the growing awareness and urgency surrounding harmful online speech and behaviour. It highlights the necessity for more effective governance in the digital world, particularly in protecting the rights and well-being of marginalised communities.

This case study raises important questions about the future of Internet governance, including the responsibility of social media platforms to develop an online environment that is both safe and welcoming to all users. It also highlights the necessity of a collaborative approach in which governments, social media platforms, and civil society work together to address the challenges posed by the digital realm.

The Government’s intervention during the voice referendum may catalyse broader discussions and actions regarding Internet governance. It is crucial to prioritise the safety and well-being of all users, especially vulnerable communities like Indigenous Australians, as we continue to navigate the complex interplay of internet culture and governance.

In sum, the case study of the Australian Government's response to social media giants during the voice referendum offers valuable insights into the challenges involved in addressing hate speech and online harm, as well as potential solutions to these problems. By critically examining this case study and engaging with key concepts in internet cultures and governance, we can better understand the experiences of Indigenous Australians online and work towards creating a safer and more inclusive digital environment for everyone.

Conclusion

Addressing hate speech and online harm within internet cultures and governance is a multifaceted and intricate challenge. Through the academic literature, the case study of the voice referendum, and the Australian Government's response to social media giants, this post has examined the complexities of internet governance, content moderation, and their impact on Indigenous Australians.

Drawing on the insights these studies provide, fostering a safer and more inclusive digital environment for all requires a collaborative approach between social media platforms, regulators, and users. The case study highlights the importance of understanding the experiences of Indigenous Australians online, acknowledging their unique vulnerabilities, and working towards empowering their voices.

Social media platforms must invest in advanced technologies to improve content moderation, strengthen community guidelines, and promote digital literacy and responsible online behaviour. Additionally, collaboration with governments, civil society organisations, and experts is essential for developing more effective strategies to address hate speech and online harm.

Ultimately, the responsibility for creating a safer digital world is shared by all stakeholders, who must work together to ensure that everyone can communicate, learn, and thrive without fear of harassment or discrimination.

References

  1. Australian Government. (2023). About | Aboriginal and Torres Strait Islander Voice. National Indigenous Australians Agency. https://voice.niaa.gov.au/about
  2. Butler, J. (2023a, March 27). Anthony Albanese criticises “very strange” question on whether voice will have input on energy policy. The Guardian. https://www.theguardian.com/australia-news/2023/mar/27/anthony-albanese-criticises-very-strange-question-on-whether-voice-will-have-input-on-energy-policy
  3. Butler, J. (2023b, March 28). Government puts social media giants on notice over misinformation and hate speech during voice referendum. The Guardian. https://www.theguardian.com/australia-news/2023/mar/29/government-puts-social-media-giants-on-notice-over-misinformation-and-hate-speech-during-voice-referendum
  4. Carlson, B. L., Jones, L. V., Harris, M., Quezada, N., & Frazer, R. (2017). Trauma, shared recognition and Indigenous resistance on social media. Australasian Journal of Information Systems, 21. https://doi.org/10.3127/ajis.v21i0.1570
  5. Carlson, B., & Frazer, R. (2018). Social media mob: Being Indigenous online. Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online
  6. eSafety Commissioner. (2022). Online hate speech. eSafety Commissioner. https://www.esafety.gov.au/research/online-hate-speech
  7. Flew, T. (2021). Regulating Platforms. Polity Press.
  8. Gorwa, R. (2019a). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118x.2019.1573914
  9. Gorwa, R. (2019b). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
  10. Gosse, C., & Hodson, J. (2021, April 29). Not two different worlds: QAnon and the offline dangers of online speech. The Conversation. https://theconversation.com/not-two-different-worlds-qanon-and-the-offline-dangers-of-online-speech-159668
  11. Massanari, A. (2016). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
  12. Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118x.2017.1293130
  13. Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33–72). Yale University Press.
  14. Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Sydney eScholarship. https://doi.org/10.25910/j09vsq57
