With the rapid development of digital technology, the Internet has become an essential part of contemporary life. As people increasingly use social platforms to acquire information and communicate, the Internet has gradually become a double-edged sword. Some users do not regulate their behaviour online, posting harmful comments and hurting others intentionally or inadvertently, with serious consequences ranging from psychological problems to, in extreme cases, driving victims to suicide. This blog therefore discusses some important issues in Internet culture and governance under the theme of online harm, aiming to help more people understand this area and protect themselves.
What are online harms?
Online harms are behaviours, occurring entirely or partly online, that might jeopardise a person’s social, mental, psychological, financial, or even bodily safety (DFAT, 2020). They are common on social media and take many forms, such as spreading false content, posting abusive and discriminatory comments, and encouraging self-harming behaviour (Godoy et al., 2023). These behaviours can ruin a person’s life and leave behind psychological trauma that is difficult to eradicate. A survey by the Australian Institute of Health and Welfare (AIHW) (2021) found that 44% of young people aged 12-17 had at least one negative online experience in the six months to 2020, and that in 2017, females aged 14-17 were 13% more likely than males to report receiving sexually explicit text messages. These figures reflect how likely young people are to encounter online harm.
Wegge et al. (2016) show that in early adolescence, bullying through online channels can bring greater popularity than traditional bullying. Bullies can publish rumours through social networks to attract supporters and thus reap more attention; cyberbullying spreads information faster and reaches more potential bystanders, thereby influencing social status (Wegge et al., 2016). Moreover, there is a strong connection between being a victim of online bullying and going on to engage in harmful online behaviour as a perpetrator (AIHW, 2021). This may be because people who experience online bullying can develop feelings of sadness and agitation that prompt thoughts of revenge, leading to serious consequences.
Have you ever been harmed online because of your gender?
Women who are public figures because of their work are easy targets for online victimisation (Esafety, 2023). Most harassment targets identified social media as the most common venue for harassment, and women were more likely than men to be sexually harassed online (Vogels, 2021). Many female MPs experience Internet harassment, alongside being targeted for abuse because of their colour, nationality, religion, or political views (House of Commons Home Affairs Committee, 2017, as cited in Flew, 2021). Not only that, but platform algorithms can reinforce gendered patterns of victimisation: Reddit, for instance, implicitly reproduces the desires of certain heterosexual white male users while ignoring the feelings of marginalised people, so the platform itself is biased by design (Massanari, 2017).
Vogels (2021) found that women aged 18-34 were more than twice as likely as men to be sexually harassed online (20% vs 9%); 52% of women reported receiving pornographic images sent without permission; and 36% of women said experiencing sexual harassment made them feel uncomfortable, compared with 16% of men, both figures from the 2017 survey. Abuse on online platforms can cause women to self-censor out of fears for their privacy and safety, and to withdraw from the public eye (Esafety, 2023). Many female users simply quit and log out of the platforms they use, resisting the psychological and health burdens those platforms impose on them. From the above data and analysis, it can be concluded that women experience more harm online than men, and that this harm is accompanied by lasting psychological effects.
Women on Banknotes: The Threat to Criado Perez
Criado Perez is a British journalist and feminist writer who, by her own account, went from being a “misogynist” to finding her own feminism. She grew up aware of the injustices done to women in the world, and after noticing that cell phones were not designed for the size of women’s hands, she began to think about change (Saner, 2022).
Criado Perez discovered that the Bank of England had decided to replace Elizabeth Fry with Winston Churchill, meaning that no woman would appear on the back of its banknotes, and this struck her as deeply unjust. She campaigned against the decision and won the support of many people, even though some doubted it would be of any use. She succeeded, and the banknote image was eventually replaced with Jane Austen, but an even greater crisis loomed over her. In July 2013, as a result of the banknote campaign, she received at least 50 rape threats per hour on Twitter (Elgot, 2013). She stood her ground after suffering this online harm and treated it as motivation to keep going. Even though the aftermath had a profound effect on her psyche, Criado Perez considered it her victory once the accounts of those who made the threats were blocked.
Leaders such as Twitter and Facebook have made great efforts to develop social media, connecting people who need to interact with one another. They are gradually becoming the focus of lawmakers, regulators, and others around the world (Flew, 2021), but they undoubtedly face difficult issues as well. What can be done about injustices like the one Criado Perez suffered? Is it simply a matter of taking one step at a time? It is becoming increasingly clear that today’s Internet era may need a management solution.
So how should it be governed?
John Perry Barlow (1996) argues that the Internet is a new world in which people can say anything, that is, express their opinions anywhere, anytime (as cited in Flew, 2021). If the right to speak is curtailed, innovative ideas may fail to emerge.
Since online harm is sustained by digital technology, it would not arise without digital platforms. Should the Internet then embrace truly unrestricted speech, ignoring online harm and leaving hate speech unregulated? Hate speech discourages participation in collective activities and fosters discord, discrimination, and intimidation (Flew, 2021); left unchecked, it can spiral out of control. At the same time, moderation tools cut both ways: the sensitive-media filters introduced by Twitter can filter out harmful speech but may also limit freedom of expression (Matamoros-Fernández, 2017).
Article 19 of the International Covenant on Civil and Political Rights (1966) states that everyone has the right to freedom of expression and that this freedom should not be interfered with; yet Article 20 states that incitement to national, religious, or racial discrimination and hatred is prohibited (as cited in Flew, 2021). These two articles can seem contradictory: if speech is truly free, why should regulators care what people say?
So how do we really strike the right balance? As Flew (2021) suggests, we should promote free speech on the one hand while reining in the authoritarian drift of competing censorship regimes on the other, and find ways to limit and sanction online harm and hate speech.
Responses from different directions
Take Facebook as an example
Online platforms create opportunities for many industries on the Internet, but because platforms aggregate user content rather than producing it themselves, they need a means of censoring that content (Price, 2021). Platforms need a procedure to detect whether a publisher’s content is compliant and free of harmful material: comments like the rape threats Criado Perez received should be filtered out, the user warned, and a penalty imposed if such content is posted again.
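As a purely hypothetical sketch (not any real platform’s system), the detect-warn-penalise flow described above might look like this; the blocked-phrase list, thresholds, and function names are all invented for illustration:

```python
# Hypothetical sketch of a detect -> warn -> penalise moderation flow.
# Phrase list and action names are illustrative only.

BLOCKED_PHRASES = {"rape threat", "kill yourself"}  # placeholder examples

def contains_harm(text: str) -> bool:
    """Naive screen: flag any post containing a blocked phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

def moderate(post: str, strikes: dict, user: str) -> str:
    """Publish clean posts; warn on a first violation, suspend on repeats."""
    if not contains_harm(post):
        return "published"
    strikes[user] = strikes.get(user, 0) + 1
    return "warned" if strikes[user] == 1 else "suspended"
```

A real system would need context-aware classification rather than literal phrase matching, which misses paraphrases and can wrongly flag quotation or reporting.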
Facebook has been controversial for refusing to ban racist content, siding with people who harm Indigenous communities online, which suggests the platform leans towards the free-speech school of thought (Matamoros-Fernández, 2017). In response to criticism across the Asia Pacific of the political, social, and cultural impact of online harms, and in line with European regulatory measures to control their proliferation, Facebook has in recent years attempted to improve its proactive detection algorithms, add human review positions, and strengthen its content review policies and subsequent accountability measures (Sinpeng et al., 2021). From hiring marketing experts to reduce the appearance of discrimination, to enhancing stakeholder engagement to curb hate speech around religion, race, and gender, Facebook has made many efforts, albeit with limited success (Sinpeng et al., 2021).
Several improvements suggest themselves. First, raise users’ awareness and sensitivity in this area through public education. Second, since Facebook users come from different countries and cultural backgrounds, hire more linguists and reviewers. Finally, invest more training data in AI systems to help algorithms learn to distinguish harmful speech from acceptable speech faster.
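To illustrate the “more training data” point, here is a deliberately tiny, hypothetical bag-of-words scorer; the example data and function names are invented, and real platforms use far larger models and datasets:

```python
from collections import Counter

def train(examples):
    """examples: (text, is_abusive) pairs -> per-label word counts."""
    counts = {True: Counter(), False: Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Flag text whose words occur more often in abusive training examples."""
    words = text.lower().split()
    abusive_score = sum(counts[True][w] for w in words)
    benign_score = sum(counts[False][w] for w in words)
    return abusive_score > benign_score
```

The more labelled examples such a scorer sees, the better it covers slang and multiple languages, which is also why hiring linguists and human reviewers complements the algorithmic approach rather than replacing it.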
Each country approaches governance differently, depending on its national context; the United States and the United Kingdom illustrate two contrasting policies. The UK Government’s draft Online Safety Bill builds on the Online Harms White Paper to prevent users from being traumatised by the content they encounter, requiring platforms to account for their algorithms and conduct risk assessments (Price, 2021). By contrast, Section 230 of the Communications Decency Act of 1996 (United States) provides that platforms are not responsible for content posted by their users (Price, 2021). Price (2021) argues that the draft Online Safety Bill could significantly improve online safety by establishing mechanisms to reduce harmful content, with regulations issued accordingly.
In conclusion, social media platforms still have a long way to go in Internet culture and governance. Online harms need to be properly addressed and organised against, taking into account the culture of each country and region as well as differences in race, gender, and religion. As users of social media platforms, we too should reduce the hurtful statements we post, to help create a safe and healthy digital age.
Australian Institute of Health and Welfare. (2021). Australia’s youth: Bullying and negative online experiences. Australian Institute of Health and Welfare. https://www.aihw.gov.au/reports/children-youth/negative-online-experiences
DFAT. (2020). Online Harms & Safety | Australia’s International Cyber and Critical Tech Engagement. Internationalcybertech.gov.au. https://www.internationalcybertech.gov.au/our-work/security/online-harms-safety
Elgot, J. (2013). Ugly Rape Abuse After Banknote Campaign. HuffPost UK. https://www.huffingtonpost.co.uk/2013/07/27/twitter-rape-abuse_n_3663904.html
Esafety. (2023). What is online abuse? ESafety Commissioner. https://www.esafety.gov.au/women/women-in-the-spotlight/online-abuse
Flew, T. (2021). Regulating Platforms. Polity Press.
Godoy, D., Tommasel, A., & Zubiaga, A. (2023). Special issue on intelligent systems for tackling online harms. Personal and Ubiquitous Computing, 27(1), 1–3. https://doi.org/10.1007/s00779-022-01682-0
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
Price, L. (2021). Platform responsibility for online harms: towards a duty of care for online hazards. Journal of Media Law, 13(2), 238–261. https://doi.org/10.1080/17577632.2021.2022331
Saner, E. (2022, June 10). “I was a misogynist”: Caroline Criado Perez on finding solutions to living in a man’s world. The Guardian. https://www.theguardian.com/tv-and-radio/2022/jun/10/i-was-a-misogynist-caroline-criado-perez-on-finding-solutions-to-living-in-a-mans-world
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://hdl.handle.net/2123/25116.3
Vogels, E. A. (2021). The state of online harassment. Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/
Wegge, D., Vandebosch, H., Eggermont, S., & Pabian, S. (2016). Popularity Through Online Harm: The Longitudinal Associations Between Cyberbullying and Sociometric Status in Early Adolescence. The Journal of Early Adolescence, 36(1), 86–107. https://doi.org/10.1177/0272431614556351