From Policy to Protection: How the UK and Australia Are Shaping a Safer Internet

Introduction: Navigating Moderation – The Battle Against Hate Speech and Online Harms

In the 21st century, fast-growing digital platforms and social media face mounting concerns about hate speech and online harms that threaten the safety of our digital communities (Flew, 2021). The worldwide adoption of digital platforms has given a voice to billions of users and bridged diverse cultures (Howard, 2019). However, it has also enabled online abuse and various forms of hate speech that erode civic discourse. According to the Pew Research Center (2021), 41% of U.S. adults have personally experienced online harassment, and 55% consider it a major problem.

Figure 1 A majority say online harassment is a major problem; 41% have personally experienced it, with more than half of this group experiencing more severe behaviours (Pew Research Center, 2021)

In response to these challenges, governments worldwide are grappling with the difficult task of reducing online harm. The United Kingdom and Australia, in particular, have introduced policy initiatives to protect their citizens in online spaces. The UK’s Online Harms White Paper and Australia’s eSafety Commissioner stand as significant examples of distinct approaches to internet regulation, offering guidance for other nations.

What Are Online Harms and Hate Speech? Why Do They Occur, and What Are the Consequences?

Online harms cover a wide range of negative behaviours, including online harassment, cyberbullying, the spread of misinformation, and the unauthorized sharing of private information, while hate speech refers to communications that degrade, intimidate, or incite violence against individuals or groups based on race, religion, gender, physical appearance, or sexual orientation (Flew, 2021, p. 115). Cyberbullying occurs when someone uses an online platform to make a child or adult feel upset or overwhelmed (eSafety Commissioner, 2024). The eSafety Commissioner (2021) reported that 44% of Australian young people had a negative online experience, including 15% who suffered threats or online abuse.

Figure 2 What is cyberbullying? (eSafety Commissioner, 2024)

Race, sexual orientation, and religion were the grounds most often mentioned by Australian adults when asked to explain what constitutes hate speech (eSafety Commissioner, 2024).

Figure 3 Hate speech word cloud (Arloo, n.d.)

Figure 3 is a visual representation of the unprompted words people use to define hate speech, and it highlights how broadly hate speech is understood. Hate speech was frequently identified as any negative statement directed at another person. Thus, hate speech is seen as “going beyond the incitement or spreading of hate” to become communication that is harmful or simply offends (eSafety Commissioner, p. 7).

The emergence of online harm and hate speech is closely related to intrinsic features of the internet, including anonymity, permanence, and the vast potential for dissemination. The anonymity offered by digital platforms often emboldens users to engage in behaviour they would avoid in offline interactions, producing an online disinhibition effect (Hollenbaugh & Everett, 2013). In general, the online disinhibition effect is a phenomenon in which people behave more freely and express themselves more openly, or even aggressively, on the internet than they would in face-to-face interactions. Hollenbaugh and Everett (2013) identify two types of online disinhibition: benign and toxic. Benign disinhibition can contribute to positive outcomes such as emotional sharing and support. In contrast, toxic disinhibition can result in negative behaviours, including flaming, cyberbullying, hate speech, and other forms of online harassment. Because the internet allows users to hide their real identities, for example through anonymous posts or comments, accountability and the fear of judgment are reduced, which encourages more disinhibited behaviour.

Other common triggers for online harassment include “political or religious beliefs, race, gender, and sexual orientation” (Flew, 2021, p. 115). In the eSafety Commissioner’s 2019 report, political views (21%), religion (20%), and gender (20%) were ranked as the top three reasons for online hate speech. Among LGBTQI communities, 61% reported that “sexual orientation was the main reason for suffering from online hate speech” (eSafety Commissioner, 2019). In addition, about half of LGBTQI+ students reported experiencing online harassment, a rate higher than average (Do Something, n.d.).

Figure 4 Reasons for online hate speech in the 12 months to August 2019 (eSafety Commissioner, 2019)

The consequences of online harms and hate speech are serious and sometimes irreversible. Individual victims may experience psychological distress, social isolation, and, in severe cases, physical harm (Citron, 2014). These individual harms feed into broader societal impacts, including the marginalization of vulnerable groups (Waldron, 2012). The eSafety Commissioner (2019) reported that 37% of respondents experienced mental or emotional stress as a result of online hate speech, and 14% experienced relationship problems.

Figure 5 Top 5 negative effects of online hate speech, August 2019 (eSafety Commissioner, 2019)

UK and Australian Policy Responses to Online Harms and Hate Speech

As digital platforms become increasingly central to our social, cultural, and political discourse, governments and regulatory bodies around the world have tried to mitigate the risks of online harms and hate speech through a variety of policy responses. Two important examples are the United Kingdom’s Online Harms White Paper and Australia’s eSafety Commissioner. Each represents a distinct approach to regulating and moderating online spaces to protect users from abuse, harassment, and misinformation.

“If we surrender our online spaces to those who spread hate, abuse, fear and vitriolic content, then we will all lose.”

Rt Hon Jeremy Wright MP & Rt Hon Sajid Javid MP (Department for Digital, Culture, Media & Sport, 2020)

The Online Harms White Paper, published in April 2019, outlined a comprehensive framework aimed at making the internet safer. The UK Home Secretary warned that the internet can be used as a medium to “spread terrorist and other illegal or harmful content, erode civil discourse, and abuse other people” (Department for Digital, Culture, Media & Sport, 2020). The United Kingdom therefore committed to a free, open, and secure internet while taking firm action to make users safer online. The White Paper establishes a new system of accountability by imposing a regulatory duty of care on technology companies, requiring them to prioritize the safety of UK users, especially children, and to manage harmful content or activity on their services. It emphasizes a preventive approach: digital platforms must not only remove harmful content but also actively implement measures to stop such content from appearing in the first place.

Australia has taken a slightly different approach, centred on the eSafety Commissioner (eSafety), Australia’s independent regulator for online safety (eSafety Commissioner, n.d.). eSafety also coordinates online safety efforts across Commonwealth departments, authorities, and agencies. It has a wide range of functions and powers, including promoting the online safety of Australian users, administering a complaints system for cyberbullying material targeting Australian children, and handling image-based abuse (eSafety Commissioner, n.d.). In addition, eSafety develops “audience-specific content for parents, educators, young people, older Australians, women, and other vulnerable citizens who experience technology-facilitated abuse” (eSafety Commissioner, n.d.). The Commissioner has a broad mandate to protect Australians from online harm, with powers to investigate complaints about online abuse, cyberbullying, and illegal content, and to enforce regulations against offending platforms. eSafety thus demonstrates a proactive strategy that combines education, prevention, and direct intervention against online harms.

Figure 6 Report online harm page (eSafety Commissioner, n.d.)

Both the UK and Australia aim to protect their citizens from online harms, but their approaches and enforcement differ. The UK relies on a regulatory duty of care and a legally binding framework that pushes major operational changes at large platforms and technology companies. Australia’s eSafety Commissioner, by contrast, combines regulation, direct intervention, and public engagement to deliver a more flexible, adaptive response to emerging online threats. Both efforts illustrate the challenges and possibilities of online moderation at a time when rapid technological change is reshaping national laws (Roberts, 2019). Policymakers in both countries seek a balance between protecting users and preserving freedom of speech on the internet.

Case Studies and Real-World Impacts of Policies on Online Child Protection

Protecting children from online harms is a priority for governments worldwide. The United Kingdom and Australia have made significant efforts to establish effective policies for protecting children on the internet. Their differing strategies show that more than one approach can help shield children from online abuses, including harmful content and cyberbullying.

A key goal of the Online Harms White Paper is to prevent children from seeing inappropriate content, particularly online pornography (Department for Digital, Culture, Media & Sport, 2020). The policy requires websites hosting adult content to verify users’ ages to prevent access by minors. For instance, Twitter (X) users can choose whether to see sensitive content under “Settings and Privacy”, rely on Twitter’s content warnings before viewing sensitive images, and block sensitive content in their search settings. Although it raises privacy concerns, this initiative represents a significant way to reduce children’s exposure to harmful online content, and it reflects a growing awareness of the need for strong regulation to protect children from online dangers (McGlynn et al., 2017).

Figure 7 Privacy and safety settings on Twitter (Joshua, 2023)
Figure 8 How to block sensitive content in Twitter search settings (Joshua, 2023)

In Australia, the Office of the eSafety Commissioner plays a major role in protecting children from cyberbullying. The eSafety framework establishes a reporting system that allows children, parents, and guardians to report online harms, including cyberbullying and hate speech. The system is designed to be easy to use and ensures complaints are addressed quickly. The Office of the eSafety Commissioner works with social media platforms and online services to remove harmful content promptly and limit the impact of online abuse.

Recognizing the importance of prevention, the eSafety Commissioner has developed numerous educational programs for communities, educators, children, and parents. These programs focus on digital literacy, identifying cyber risks, and responding safely online. eSafety also runs workshops to help children gain the knowledge and skills they need to be safe and responsible online. Moreover, the eSafety Commissioner collaborates internationally with counterpart regulators and non-governmental organizations focused on online safety. These partnerships aim to share best practices and combat cross-border online threats, helping to build global standards for the protection of children online (eSafety Commissioner, n.d.).

Figure 9 eSafety Kids page (eSafety Commissioner, n.d.)

The UK’s age verification framework addresses the challenge of limiting minors’ access to inappropriate content and underscores the importance of technical solutions in online safety. Australia’s crackdown on cyberbullying through the eSafety Commissioner, meanwhile, highlights the critical role of regulatory frameworks in creating a safe online environment for children. Together, these efforts demonstrate the positive impact that policy intervention can have on children’s internet safety.

Summary

Overall, the policies of the UK and Australia illustrate the global challenge of building an inclusive, respectful, and safe cyberspace. The Online Harms White Paper and the eSafety Commissioner both aim to protect users from the harmful side of the internet. The two countries’ experiences offer valuable lessons for other nations facing similar problems and underscore the role of digital platforms and the internet in building an environment where everyone, particularly children, can explore and learn safely. This raises an essential question: can we trust platform companies to regulate content in the public interest, or should that responsibility rest with governments? (Flew, 2021).

References

Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.

Department for Digital, Culture, Media & Sport. (2020, December 15). Online Harms White Paper. GOV.UK. Retrieved April 09, 2024, from https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper

Do Something. (n.d.). 11 Facts about cyberbullying. https://www.dosomething.org/us/facts/11-facts-about-cyber-bullying#fnref5

eSafety Commissioner. (n.d.). Cyberbullying. https://www.esafety.gov.au/key-topics/cyberbullying

eSafety Commissioner. (n.d.). eSafety kids. https://www.esafety.gov.au/kids

eSafety Commissioner. (n.d.). Online hate speech – report. https://www.esafety.gov.au/sites/default/files/2020-01/Hate%20speech-Report.pdf

Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (pp. 115–118). Polity.

Hollenbaugh, E. E., & Everett, M. K. (2013). The Effects of Anonymity on Self-Disclosure in Blogs: An Application of the Online Disinhibition Effect. Journal of Computer-Mediated Communication, 18(3), 283–302. https://doi.org/10.1111/jcc4.12008

Howard, J. W. (2019). Free Speech and Hate Speech. Annual Review of Political Science, 22(1), 93–109. https://doi.org/10.1146/annurev-polisci-051517-012343

Joshua, C. (2023). What is sensitive content on Twitter? Avast. https://www.avast.com/c-how-to-see-sensitive-content-on-twitter

McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond ‘Revenge Porn’: The Continuum of Image-Based Sexual Abuse. Feminist Legal Studies, 25(1), 25–46. https://doi.org/10.1007/s10691-017-9343-2

Pew Research Center. (2021, January 13). The state of online harassment. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media (1st ed.). Yale University Press.
