
Do you really understand the online harm caused by internet abuse?
Online harms?
Since the boom of digital platforms and mobile applications, the number of internet users has grown rapidly worldwide, but the accompanying problems, such as online harms, have gradually become apparent. The term 'online harms' may not sound unfamiliar to you: in the era we live in, the internet has become essential and has massively revolutionised the way we communicate, work, and interact with each other, so you may have been involved in, seen, or heard about online harms. However, you may not realise the variety of online harms or the complexity of regulating harmful online content. In a general sense, online harms refer to negative or harmful behaviour that occurs in the online environment as a result of internet abuse, such as using the internet to bully and threaten others. More specifically, online harms are behaviours or activities that take place wholly or partially online and that can damage an individual’s social, emotional, psychological, financial or even physical safety (Wegge, Vandebosch, Eggermont & Pabian, 2016). Such behaviours and activities include hate crimes and hate speech, online harassment, cyberbullying, cybercrime, and online abuse. Moreover, Flew (2021) defines online harms as ‘any harmful content or activity that is facilitated by digital platforms or service and that has negative effects on individuals, groups, or society at large’, and notes that online harms can take various forms. From my perspective, online harms are associated with online content that can cause real harm to vulnerable groups, including children, young people, and people living with disabilities or mental health issues, and that may threaten people’s privacy and personal safety.
The people most at risk of online harm
Due to a range of intersectional factors, some individuals and communities are more likely to be targeted online and are at greater risk of serious harm. These factors include, but are not limited to, race, religion, cultural background, gender, sexual orientation, disability, and mental health conditions. The risk can also be increased by situational vulnerabilities, such as being affected by domestic and family violence, age, relationship problems, financial difficulties, and language barriers (Sinpeng, Martin, Gelber & Shields, 2021). For example, an older person who is unfamiliar with digital technologies may be more vulnerable to cybercrimes such as online scams and fraud, which harm people financially through digital platforms. A disturbing trend can also be seen among Aboriginal and Torres Strait Islander peoples and LGBTIQ+ communities. Research conducted by eSafety, Australia’s independent regulator for online safety, shows that Aboriginal and Torres Strait Islander peoples and LGBTIQ+ individuals are more likely to suffer online harms: Aboriginal and Torres Strait Islander peoples experience online hate speech at more than double (33%) the national average in Australia (14%), and 30% of LGBTIQ+ people have experienced hate speech compared to 14% of the rest of the population (eSafety Commissioner).
Varied effects and consequences of online harms
You may know that online harms can pose risks to people, but in fact they produce varied negative effects and consequences across many aspects of life. The scope of online harms is vast and complex, as harmful content and behaviours can target different groups and have varying levels of impact on individuals and society (Flew, 2021). Online harms can damage individuals’ social and emotional well-being. In the case of cyberbullying, that is, the use of digital communication technologies to intentionally and repeatedly harass, humiliate or intimidate someone, victims may experience feelings of guilt, helplessness and fear, which can have a detrimental effect on their confidence and self-esteem (Sujarwoto, Tampubolon & Pierewan, 2019). Online harassment may leave victims feeling intimidated, which can reduce their sense of security and make them less likely to interact with other users on the platform. Disinformation can also leave individuals feeling confused, frustrated, and distrustful, with negative impacts on their mental and emotional well-being.
Furthermore, an article by Ariadna Matamoros-Fernández explores the role of social media platforms in the spread and amplification of racism. The article highlights several consequences of online harms associated with racism, for example the normalisation of racist discourse in online content. It argues that online platforms can normalise racist discourse by providing a space for it to thrive and spread through harmful content, leading to the acceptance of racist attitudes and beliefs among users (Matamoros-Fernández, 2017). Race-related harmful content on online platforms may also amplify racist messages, making them more visible and accessible to a wider audience and thus facilitating the spread of hate speech targeting marginalised groups. As a result, online racism can create a polarised environment in which individuals with different views become increasingly isolated from each other, leading to further division and conflict and even eroding social cohesion.
The death of Charlotte Dawson

One well-known case of online harms in Australia is the tragic story of Charlotte Dawson, who was subjected to sustained online abuse and harassment in 2012. Dawson was a New Zealand-Australian television personality, widely known as host of Getaway in New Zealand, a host on The Contender Australia, and a judge on Australia’s Next Top Model. She was targeted by a group of online trolls who bombarded her Twitter account with abusive messages and death threats, including calls to ‘go hang yourself’ and other cruel insults. Reportedly, the abuse directed at Dawson began after she spoke out against a 17-year-old girl who had sent her a series of abusive messages on Twitter. She reported the incident to the police and made the teenager’s identity public, but this only seemed to escalate the abuse she was receiving online. Within a few weeks, Dawson had received thousands of abusive messages from online trolls, some of which included graphic descriptions of violence and sexual assault. Her experience sparked a national conversation about online harms in Australia, particularly cyberbullying, with an increasing number of people calling for stronger and more effective laws to address the issue (Turton-Turner, 2013). Charlotte Dawson died in 2014, reportedly as a result of depression and other mental health problems that were exacerbated by the cyberbullying she had suffered.
The death of Charlotte Dawson highlights the seriousness of online harms, particularly cyberbullying, and their impact on the mental health and well-being of victims. The abuse directed at Dawson by online trolls, including graphic descriptions of violence and sexual assault, death threats, insults, and personal attacks, should be recognised as typical online harms because these behaviours were facilitated by online platforms and damaged her social, emotional, and psychological safety. The effect of cyberbullying on mental health shown in this example cannot be ignored. Several studies indicate that exposure to cyberbullying is associated with higher levels of depression, anxiety, and suicidal ideation, particularly among vulnerable individuals with pre-existing mental health issues (Gavaghan & King, 2013). Drawing on Dawson’s experience, it is clear that the combination of pre-existing mental health issues and sustained online abuse and harassment likely contributed to her decision to take her own life. The tragedy of her death spotlighted the urgent need to address online harms and protect individuals from the negative consequences of today’s digital technologies. Her death is therefore a stark reminder of the devastating impact online abuse can have on users and society, and of the critical need for more effective measures and regulations to prevent and address online harms in Australia’s online environment.
Current regulation of online harms in Australia and the challenges it faces
The Online Safety Act 2021

The Online Safety Act 2021 is one of the key regulations of digital technologies in the Australian context. It is an Australian law passed by the Australian Parliament in 2021 that strengthens Australia’s protections against online harm in order to keep pace with abusive activities, behaviour and toxic content on online platforms. The legislation is designed to improve online safety for Australians in a time of rapid change online, targeting abusive and harmful content including cyberbullying, online abuse, and terrorist material. Under the Online Safety Act 2021, social media companies and other online service providers are required to remove harmful content, and failure to do so can result in substantial fines and penalties, such as injunctions and other legal actions against individuals or companies that breach the Act (Scott, 2021). The Act also introduces several major changes. For example, it creates a world-first Adult Cyber Abuse Scheme for Australians aged 18 and older, which is committed to providing support and assistance to adult Australians who experience serious forms of online abuse. The scheme aims to address the growing problem of online abuse and harassment by providing a range of services and support to victims. Flew (2021) argues that online harms are not only a social and moral issue but also a political and legal issue, requiring a coordinated effort from various stakeholders to address. It is for this reason that the scheme empowers victims of online harms to apply for a removal notice for harmful material. Once notified, the eSafety Commissioner has the power and authority to issue such notices to websites, social media platforms, and other online services, requiring them to remove abusive and harmful content from the internet. The legislation therefore makes social media companies and other online service providers in Australia more accountable for the online safety of the people who use their services.
Freedom of expression or intervention against harmful content?

Nevertheless, regulating harmful content on digital platforms and mobile applications still faces challenges. Striking the right balance between freedom of expression and protecting users from online harms is arguably one of them. There is a tension between protecting individuals from harmful content and upholding the principle of freedom of speech and expression, because both values are significant and sometimes come into conflict in the context of digital platforms. Freedom of speech and expression is a fundamental human right that allows individuals to express their ideas and opinions and to participate in public debate and discourse, and social media platforms play a role in promoting it: they are powerful tools of communication that allow individuals from all sides to express themselves. However, freedom of expression through digital technologies can also be used to justify harmful online behaviours and activities, including hate speech, cyberbullying, and online harassment, which may cause harm to certain individuals or communities (Sander, 2020). Intervening with or removing content can, in turn, be seen as affecting individuals’ rights to freedom of expression and privacy. Moreover, the conflicting interests of online platforms may make such policies harder to implement, because platforms need to provide an open and diverse space for free speech and expression while also protecting users from harmful and aggressive content. This is a difficult balance to strike, especially when the definition of what constitutes harmful content is often controversial and varies across cultures and nations (Rowbottom, 2014). It is therefore hard and challenging for regulators to maintain an appropriate balance between these competing values and to ensure that users are protected from harmful content without unduly restricting freedom of expression.
Conclusion
In conclusion, this paper has outlined and analysed online harms, understood as harmful behaviours that occur in the online environment as a result of internet abuse and that affect individuals’ safety. It has argued that online harms can take various forms, such as hate speech, online harassment, cyberbullying, cybercrime, and online abuse, which are related to online content on digital platforms and are more likely to cause real harm or trauma to vulnerable users. Certain individuals and communities have been shown to be more likely to be targeted online and at risk of serious harm due to intersectional factors and situational vulnerabilities, including race, cultural background, sexual orientation, domestic and family violence, and language barriers. The effects and consequences of online harms are vast and complex, and they can hurt individuals’ social, emotional, and mental well-being. The case of Charlotte Dawson illustrates the tragic result of being targeted with sustained online abuse and harassment, which eventually led to her suicide and sparked a national conversation about online harms in Australia. This paper also introduced the Online Safety Act 2021, new Australian legislation designed to tackle online harms by making social media companies and other online service providers more accountable for online safety. However, moderating online content remains a challenge, because it is hard for regulators to strike the right balance between freedom of expression and protecting users from online harms.
Reference list
Scott, B. (2021). Basic online safety expectations under the “Online Safety Act 2021” (Cth). Internet Law Bulletin, 24(5), 100–102.
Flew, T. (2021). Regulating platforms. Polity Press.
Gavaghan, C., & King, M. (2013). Reporting suicide: Safety isn’t everything. Journal of Primary Health Care, 5(1), 82–85. https://doi.org/10.1071/hc13082
eSafety Commissioner. (n.d.). How the Online Safety Act supports those most at risk. Retrieved April 5, 2023, from https://www.esafety.gov.au/communities/how-online-safety-act-protects-those-most-at-risk
Rowbottom, J. (2014). In the shadow of the big media: Freedom of expression, participation and the production of knowledge online. Public Law, 3, 491–511.
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
Sander, B. (2020). Freedom of expression in the age of online platforms: The promise and pitfalls of a human rights-based approach to content moderation. Fordham International Law Journal, 43(4), 939–.
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
Sujarwoto, S., Tampubolon, G., & Pierewan, A. C. (2019). A Tool to Help or Harm? Online Social Media Use and Adult Mental Health in Indonesia. International Journal of Mental Health and Addiction, 17(4), 1076–1093. https://doi.org/10.1007/s11469-019-00069-2
Turton-Turner, P. (2013). Villainous avatars: the visual semiotics of misogyny and free speech in cyberspace. Forum on Public Policy.
Wegge, D., Vandebosch, H., Eggermont, S., & Pabian, S. (2016). Popularity Through Online Harm: The Longitudinal Associations Between Cyberbullying and Sociometric Status in Early Adolescence. The Journal of Early Adolescence, 36(1), 86–107. https://doi.org/10.1177/0272431614556351