Online Hate Speech vs. Internet Governance: Twitter vs. eSafety

Kidwai , A., & Carroll, L. (2023). [BBC News]

Australia's eSafety Commissioner has accused X (formerly Twitter) of tolerating its users' abusive content targeting Indigenous Australian and LGBTQI+ communities and has indicated the corrections that could be made to X's online governance. Similar issues have recurred since the advent of social media, for example, the Gamergate controversy and the anti-Asian hate speech during the COVID-19 pandemic. What these incidents have in common is that they result not only from a lack of platform regulation and governance but also from tensions with users' freedom of expression. This blog explains and navigates the relationship between media governance and online harassment, drawing insights from the eSafety Commissioner's accusations against X. By analysing the case, we can better understand the challenges and opportunities in regulating online platforms to address hate speech and online harassment: how oversight agencies hold social media platforms accountable, the legal and ethical questions about what is allowed online, the limits of freedom of speech, and how poorly managed media rules affect people and society. Through this case, the blog aims to provide insights into how media rules can be improved to tackle online hate speech and make the internet safer for everyone.

Internet governance can be described as the policies and regulations that users should be aware of when exchanging information online (Flew, 2021, p. 169). The definition becomes even more concrete when it comes to online platforms. Platforms like X, Facebook, and Weibo are designed for users to share opinions and exchange perspectives. However, when expressing themselves online, people can be attacked deliberately or accidentally. When people are hurt intentionally, they are being targeted by hate speech, which is defined as speech that 'expresses, encourages, stirs up, or incites hatred against a group of individuals distinguished by a particular feature or set of features such as race, ethnicity, gender, religion, nationality, and sexual orientation' (Flew, 2021, p. 91). Users also help monitor the presence of hate speech by reporting inappropriate content to the platform or to the authorities when they spot it. Hate speech manifests in digital spaces in many forms: the United Nations (2023) summarises these forms of expression as "images, cartoons, memes, objects, gestures and symbols," which can be delivered either online or offline. This is where media governance plays a significant role in combating online hate speech. With policies and standard frameworks acting like a surveillance camera, online platforms are compelled to filter their users' content; otherwise, the platform itself will be held accountable. Governance also helps to prevent physical and mental harm that could lead to severe offline outcomes: studies have shown that victims of online hate speech are more likely to suffer from mental health issues and feelings of insecurity, which can create lasting difficulties in their lives (Dreißigacker et al., 2024). In addition, governance improves the credibility of online platforms, giving users the trust and confidence to express their opinions.

What Happened Between the eSafety Commissioner and Twitter?

The eSafety Commissioner is responsible for Australians' online safety, protecting users from inappropriate content and raising users' awareness of online safety. It also develops policies and regulations by gathering data and collaborating with stakeholders such as governments, industry, and academia to keep up with the evolving trends of online society. The case between the eSafety Commissioner and X stems from the significant number of complaints eSafety received about hate speech towards marginalised communities and Indigenous Australians after Elon Musk bought the platform. eSafety also accuses Elon Musk of reinstating around 62,000 previously suspended or banned accounts. eSafety therefore required Twitter to produce documents addressing the toxic situation on the platform, a demand Elon Musk resisted in the name of freedom of speech. The documents would serve not only as a statement of how the issue would be addressed, but also as a commitment to regulate user content properly.

What Are Twitter's Existing Policies and Regulations?

As Twitter certainly needs to be held accountable for overlooking the spread of online hate speech on the platform, we should examine its existing policies on online abuse. Twitter issued a "hateful conduct" policy to cope with the spread of online hate speech, stating that posts violating other people's rights would be prohibited and deleted, and that offending accounts could even be suspended. It also provides reporting tools that enable users to flag any content they find offensive or that violates their rights. Most importantly, Twitter formed the Twitter Moderation Research Consortium, which shares a large-scale database with global members across multiple fields. According to the eSafety Commissioner's accusation, however, Twitter has failed to comply with its own policies and regulations on online hate speech. The volume of complaints received by eSafety indicates that Twitter's content moderation needs to be re-examined or updated. According to the eSafety Commissioner, Ms Inman Grant, Twitter has cut around 6,500 employees, which could explain the weakening of its content moderation. In this light, we can say that Twitter has ignored the violation of marginalised communities' rights and refused to provide documentation that could improve the platform's environment.

What Happens After Hate Speech?

The case between eSafety and Twitter draws our attention to a significant concern: the impact and consequences of inadequate media governance of online hate speech. One of the most important impacts is on the mental well-being of the victims of online hate. Online hate speech spreads rapidly; unlike a street fight, it does not fade away after a few days. Even when the original hateful post is deleted from the platform, it can remain present on the internet. Some posts even spark further discussion as a negative form of participation on platforms (Burgess et al., 2018, p. 257), which may affect the victim for a lifetime (Dreißigacker et al., 2024) and lead to social isolation from their community and even their family. Even without physical action, online hate speech is likely to make victims anxious and insecure about their surroundings, harm that is often seen as "devastating as physical pain and material loss" (Dowd et al., 2006, p. 286). Moreover, some online abuse can escalate into physical violence, as seen in the surge of anti-Asian hate during the COVID-19 pandemic: at first, memes about "Kung flu" and people eating dogs spread within anti-Asian groups; then cases of street violence targeting Asian people appeared in the news repeatedly. Such incidents also have a broader social impact on racial groups and marginalised communities by undermining social cohesion. Both Indigenous Australians and LGBTQI+ communities are minorities, not only in Australian society but worldwide. Hate speech targeted at them can deepen racial divisions and feelings of insecurity within their communities and in how they define themselves, which can lead to further extreme actions and tensions that threaten society. It may also chill freedom of speech on the platform for users who become wary of how their words will be received.

Inadequate media governance of online hate speech can likewise lead to severe consequences. The lack of governance gives an advantage to those who seek to stir up conflict. Elon Musk's refusal to provide corrective documents could foster persistent harm to Australian society by granting tacit permission for hate speech to proliferate. This can lead users, not only those who are attacked but also those who witness the online discrimination, to abandon the platform out of a loss of trust and safety. It is also vital for Twitter to set an example for smaller platforms, since platforms adjust their guidelines with reference to one another (Burgess et al., 2018, p. 264).

Given all the negative impacts and serious consequences caused by a lack of platform governance, it is necessary to ask what protects hate speech despite such widespread discouragement. Elon Musk continues to argue that letting users exercise their right to freedom of speech is not a form of encouraging online hate speech, even though the targeted victims are protected under defamation law in Australia. This demonstrates the blurry boundary between policies and regulations: hate speech shelters behind human rights legislation that encourages users to express themselves freely, allowing it to hide in the grey area between these laws.

Are There Any Challenges?

Governing a massive platform like Twitter is undoubtedly challenging in every respect. Firstly, as one of the biggest platforms, Twitter generates enormous amounts of content every second, which requires a great deal of work to monitor every tweet and reply. This demands not only precise automated content moderation but also sufficient human labour to detect potentially harmful content. Secondly, because online language evolves constantly, some content is hard to interpret, such as abbreviations, irony, and culturally specific references; the sketch below illustrates why simple automated filters struggle with this. Moreover, Twitter operates in real time, so it is difficult to track down and eliminate content once it has spread widely across the internet. Last but not least, Twitter has roughly 368 million users around the world, so it needs to cooperate with stakeholders worldwide in every relevant area, such as culture and customs, to keep its content moderation connected to and appropriate for every user's background.
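To make the detection challenge concrete, here is a minimal, hypothetical Python sketch of a purely keyword-based filter. The blocklist terms, example posts, and the helper name naive_flag are invented for illustration and do not represent Twitter's actual moderation systems; the point is simply that obfuscated spellings and irony slip past simple rules.

```python
# Hypothetical illustration: a naive keyword filter for abusive posts.
# The terms and examples below are placeholders, not real moderation data.

BLOCKLIST = {"slur_a", "slur_b"}  # stand-ins for known abusive terms


def naive_flag(post: str) -> bool:
    """Flag a post if any word matches the blocklist exactly."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)


posts = [
    "You people are a slur_a",            # caught: explicit blocklisted term
    "sl*r_a behaviour as usual smh",       # missed: obfuscated spelling
    "Oh sure, they're SO welcome here",    # missed: hostility carried by irony
]

for p in posts:
    print(naive_flag(p), "-", p)

# Only the first post is flagged. Obfuscation, abbreviation, and irony all
# evade the rule, which is why platforms pair automated screening with
# human reviewers and user reports.
```

This is why the scale problem is double-edged: automated rules are needed to cope with volume, yet the content they most often miss is exactly the ambiguous, culturally specific material that requires human judgement.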

What Are the Possible Solutions?

To make Twitter a space that is safe and credible for every user, especially for Indigenous Australians and the LGBTQI+ community, here are some adjustments Twitter could integrate into its decision-making process. One strategy is to anticipate trends in online culture and adjust policies in advance in response to anything with the potential to target specific groups (Burgess et al., 2018, p. 264). Another is to work with international stakeholders to balance and resolve competing interests. In this case, Twitter could negotiate with the eSafety Commissioner and partner with multiple stakeholders in Australia, such as community organisations, government agencies, and academic researchers, to agree on what content needs moderation and correction, thereby building a legal framework to complete its governance (Weber, 2014, p. 5). It is also worth noting that the group that sets platform policies makes up only a small percentage of the company and is difficult for users to reach. Twitter should therefore conduct user research domestically and internationally to listen to users' voices and determine comprehensively what needs moderation beyond conventional content. Most importantly, to gain trust, Twitter needs to enhance its transparency, for example by providing accessible information on how many reports are being processed and what content has been removed, so that users can scrutinise its moderation; a simple sketch of what such a public summary might contain follows below. For their part, users can refuse to take part in online hate speech and avoid remaining passive bystanders to online abuse, helping to create a harmonious environment for everyone.
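As a rough illustration of the transparency measure suggested above, the following minimal Python sketch aggregates report-handling outcomes into a publishable summary. The field names, categories, and sample records are invented for the example and are not drawn from any real Twitter/X reporting data.

```python
# Hypothetical illustration: aggregating user-report outcomes into the kind
# of public transparency summary proposed in the text. All values are made up.

from collections import Counter
from dataclasses import dataclass


@dataclass
class ReportOutcome:
    category: str   # e.g. "hateful_conduct", "harassment" (illustrative labels)
    action: str     # e.g. "removed", "restricted", "no_action"


def summarise(outcomes: list[ReportOutcome]) -> dict:
    """Count processed reports by category and by action taken."""
    return {
        "reports_processed": len(outcomes),
        "by_category": dict(Counter(o.category for o in outcomes)),
        "by_action": dict(Counter(o.action for o in outcomes)),
    }


sample = [
    ReportOutcome("hateful_conduct", "removed"),
    ReportOutcome("hateful_conduct", "no_action"),
    ReportOutcome("harassment", "restricted"),
]
print(summarise(sample))
```

Publishing aggregate figures like these, rather than raw posts, would let users and regulators check whether reports are actually being acted on without exposing anyone's personal data.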

This blog has explored the challenges and opportunities in regulating online platforms to address hate speech and online abuse by examining the case between the eSafety Commissioner and Twitter. It has also illustrated the possible impacts and consequences of a platform that lacks governance and offered potential solutions for Twitter to improve its regulation. Even though there is still a long way to go, our goal is for the internet to evolve into a space where users can freely express their opinions without harming the interests of others.

References

  • Burgess, J., Marwick, A. E., & Poell, T. (Eds.). (2018). The SAGE handbook of social media. SAGE Publications.
  • Dowd, N. E., Singer, D. G., & Wilson, R. F. (2006). Handbook of children, culture, and violence. Sage Publications.
  • Dreißigacker, A., Müller, P., Isenhardt, A., & Schemmel, J. (2024). Online hate speech victimization: Consequences for victims' feelings of insecurity. Crime Science, 13(1). https://doi.org/10.1186/s40163-024-00204-y
  • eSafety Commissioner. (2021). An overview of eSafety's role and functions. https://www.esafety.gov.au/sites/default/files/2021-07/Overview%20of%20role%20and%20functions_0.pdf
  • Flew, T. (2021). Regulating Platforms. Polity Press.
  • United Nations. (2023). What is hate speech? https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech
  • Weber, R. H. (2014). Shaping internet governance: Regulatory challenges. Springer.
  • Picture source: Kidwai, A., & Carroll, L. (2023). [BBC News].
