
Introduction
The Internet has revolutionised how people worldwide communicate and share information instantly. Statistics for 2020 show approximately 4 billion internet users worldwide, an increase of 122% over ten years (ITU, 2020). The online platform has become essential for communication, connection and freedom of expression. Undoubtedly, it has had numerous positive effects on society, for example, by making communication and knowledge sharing on important topics such as disease treatment and disaster relief more efficient. However, the Internet has also brought new risks and potential harms. With the ability to reach a wide range of audiences, there has been a shift in the way we engage with politics, public affairs and each other; media platforms, mainly social media, have become a hotbed for spreading hate messages and inciting violence, amplifying the impact of these messages to an unprecedented degree.
As individual digital inhabitants, we have gathered into the giant digital audiences that allow platforms to flourish; we are creators and consumers of platform content, yet we may sometimes become online harmers and victims. In an era characterised by a platformed Internet, how we frame its regulation and governance from our perspective as users has significant implications.
“Online harm and hate speech are not as simple as we think – let’s identify them.”
Online harm is happening everywhere, all the time, sometimes gradually…
To give you a clearer picture of the issue of cyber-harm, I would like to start by asking you a few questions:
Have you ever experienced online victimisation?
Have you ever viciously attacked someone on social media?
Are you aware of the potential for harm from your comments (even if you don't mean to cause it)?
Have you ever known someone (friend, family member, or even celebrity) who has received a mean comment or offensive message?
Upon reflection, you will likely recall many cases of online harm.
YES!
When many people gather on social media to target one person, they can hurt them badly. And it's not just the people sending the mean messages who can see it – millions of others can too! This makes the damage worse than if someone said something mean to your face, because it isn't limited to one place or one person's voice.
The Australian Government (n.d.) gives a precise definition of online harm:
“Online harms are activities that take place wholly or partially online that can damage an individual’s social, emotional, psychological, financial or even physical safety.”
The UK's Online Harms White Paper (Department for Digital, Culture, Media and Sport, & Home Office, 2019) identifies a wide range of different types of online harm, distinguishing harms with a clear definition (such as terrorist content) from those with a less clear definition (such as cyberbullying and trolling).
Users talk and interact with each other by generating and sharing content on social media platforms like Facebook, Instagram and Twitter, which makes it possible for an individual's voice to be amplified almost infinitely. Moreover, most social media companies' business models are built on capturing attention, and offensive and hurtful speech is particularly good at attracting it, so such speech carries a louder voice on social media than in traditional mass media – and online harm sometimes reaches you without you even realising it. One of the major challenges is hate speech, which has a substantial negative impact on society because it incites real-world violence (O'Regan, 2020).
We need a precise definition of hate speech to help us identify it:
Hate speech expresses, encourages, provokes or incites hatred against groups distinguished by a particular characteristic or set of characteristics, such as race, ethnicity, gender, religion, nationality and sexual orientation (Parekh, 2012). There are three fundamental characteristics of hate speech:
- It targets specific or easily identifiable individuals, or sometimes a group of people, on the basis of an arbitrary and normatively irrelevant characteristic.
- It stigmatises the target group by implicitly or explicitly ascribing to it qualities that are widely perceived as highly undesirable.
- It treats the target group as an undesirable presence and a legitimate object of hostility.
There have been many high-profile cases of hate speech on social media recently. Here are a few examples:
Leslie Jones: Following the release of the Ghostbusters reboot in 2016, actress and comedian Leslie Jones was subjected to racist and sexist abuse on Twitter (Woolf, 2016). As a result of the abuse, Jones temporarily left the platform.

Christchurch mosque attacks: On 15 March 2019, an Australian far-right extremist shot and killed 51 people at two mosques in Christchurch within 36 minutes, the deadliest terrorist attack in New Zealand's history (Macklin, 2019). He broadcast the atrocity live on Facebook, exposing the fatal weakness of such platforms when faced with the viral spread of ultra-violent content.

“Digital media let us live in an age where we can spread any message or content with a few taps on our phone screens, without thinking about the consequences – what has led to this?”
Freedom of expression is the right to express one's ideas, opinions and beliefs without censorship or restraint. According to Article 19 of the United Nations Universal Declaration of Human Rights (1948): “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive, and impart information and ideas through any media and regardless of frontiers.” Deibert et al. (2008) offer a perspective on freedom of expression online, arguing that it is not only a legal right but also a key component of democratic governance and human development: it facilitates the exchange of ideas, makes collective action possible and promotes the development of a vibrant public sphere (Deibert et al., 2008).
BUT can speech on the Internet really be uncensored and unrestricted?
No! We need protection!
Whether freedom of expression on the Internet can be completely free from censorship and restriction has long been a complex and challenging question. Some argue that absolute freedom of expression is essential for a democratic society. In contrast, others believe restrictions are necessary to protect individuals from harmful content and hate speech: the broad freedom of expression that platforms grant their users also gives harm-doers and extremists substantial free rein, allowing harmful content such as hate speech to spread across the Internet like a virus.
Academic research has shown that censorship and restrictions on freedom of expression on the Internet can have both positive and negative effects. While censorship can be used to prevent harmful content, such as child pornography and terrorist propaganda, it can also be used by governments to silence dissent and control the flow of information (Parekh, 2012; Tkacheva et al., 2013). For example, criticising the government is a hazardous activity in Egypt. In 2018 alone, the authorities arrested at least 113 people for a range of absurd reasons, including satire, tweeting, supporting football clubs, denouncing sexual harassment, editing films and giving interviews; those arrested were accused of being “members of terrorist organisations” and of “spreading false news”. Detained for months without trial, those who eventually faced trial were sentenced by military courts, despite the inherent unfairness of military trials of civilians in Egypt and elsewhere (Amnesty International, n.d.). When governments use a combination of technical and legal measures to restrict freedom of expression in this way, such censorship and regulation harm individuals and undermine the development of a vibrant civil society.
Given all of the above, on the one hand, social media platforms like Facebook and Instagram are under increasing pressure to moderate content and limit hate speech in order to protect their users – but the line between acceptable and unacceptable speech is often blurred, and censorship can have unintended consequences for free expression.
On the other hand, governments have identified several rather challenging questions:
Should online expression be regulated? If so, how should legislation be introduced to curb these excesses?
How can freedom of expression and online regulation be balanced?
“How to balance freedom of expression and platform regulation is a huge challenge – moderation is at the heart of digital platform regulation.”
Regulating cyberspace first requires drawing a line between legitimate freedom of expression and harmful speech.
Academics have demonstrated the diversity of hostile online behaviour through the many descriptors that appear in scholarly works, with studies focusing on topics such as extremist speech, excitable speech, offensive speech, dangerous speech, cyberbullying, inflammatory speech and human flesh search (Butler, 2021; Solopova, Scheffler, & Popa-Wyatt, 2021; Hinduja & Patchin, 2009; Chen, Chan, & Cheung, 2018).
In the case of human flesh search, Chen, Chan and Cheung (2018) give a clear delineation in their paper: human flesh search is the search for, and deliberate disclosure of, private information about a person on the Internet without their consent, often used to punish the target and incite online violence. Almost all types of disclosure of personal data can lead to negative emotions, including depression, anxiety and stress. The study calls for further research on such disclosures to inform comprehensive cyber-violence prevention programmes (Chen et al., 2018).
Similarly, in their academic study of cyberbullying, Hinduja and Patchin (2009) define it as “intentional and repeated harm caused through the use of computers, mobile phones and other electronic devices”, and they capture its most critical elements as follows (see the sketch after this list):
- Intentional: the behaviour must be deliberate, not accidental.
- Repetitive: bullying reflects a pattern of behaviour, not just an isolated incident.
- Harm: the target must be aware that harm has been done.
- Electronic: the behaviour is carried out through computers, mobile phones or other electronic devices.
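Because every element must be present for the definition to apply, the checklist lends itself to a mechanical reading. Here is a minimal sketch (my own illustration, not an instrument from Hinduja and Patchin) that encodes the four elements and applies them to a reported incident:

```python
from dataclasses import dataclass


@dataclass
class Incident:
    """One reported incident, scored against Hinduja and Patchin's four elements."""
    intentional: bool     # deliberate, not accidental
    repeated: bool        # a pattern of behaviour, not an isolated event
    harm_perceived: bool  # the target is aware that harm has been done
    electronic: bool      # carried out via computers, phones or similar devices


def is_cyberbullying(incident: Incident) -> bool:
    """The definition applies only when all four elements are present."""
    return (incident.intentional and incident.repeated
            and incident.harm_perceived and incident.electronic)


# A single hurtful message sent by phone is intentional and electronic,
# but not repeated, so it falls outside this definition.
one_off = Incident(intentional=True, repeated=False,
                   harm_perceived=True, electronic=True)
print(is_cyberbullying(one_off))  # False
```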
From a policy development perspective, some national laws also attempt to draw the line between harmful speech and matters of public interest, with countries such as Germany, New Zealand and Ireland covering a broad range of harmful speech (Smith, 2019; Network Enforcement Act, 2017; Harmful Digital Communications Act, 2015).
For example, Ireland's Prohibition of Incitement to Hatred Act 1989 provides a clear definition of hate speech and of how it is disseminated, stating that it is an offence to distribute threatening, abusive or insulting material that is intended or likely to incite hatred against a group of people on account of their race, colour, nationality, religion, ethnic or national origins, sexual orientation or membership of the Traveller community; such communication may be in writing, spoken word, broadcast or recording (Prohibition of Incitement to Hatred Act, 1989). The descriptions in this law apply equally well to defining and identifying the online communication of hate speech today.
However, please note!
In the digital age, as the boundaries of virtual space blur, reaching a consensus on what is and is not acceptable online behaviour is vital – and such a consensus can also prevent particular countries from using online surveillance as a tool of political dictatorship. As the Interparliamentary Coalition for Combating Antisemitism (2013) proposes, this will require sustained cooperation and joint effort from academics, governments (with legislative support) and digital media companies.

Furthermore, we should agree with what Gillespie (2018) proposes: moderation is a core element of the governance and regulation of digital platforms.
He argues that platforms must moderate both to protect one user or group from another, removing offensive, illegal or despicable behaviour, and to present their best face to new users, advertisers, partners and the general public; the very act of moderation thus shapes social media platforms as institutions, tools and cultural phenomena (Gillespie, 2018).
Today, platform moderation is becoming a familiar and accepted way of dealing with user-generated content and a mundane feature of the digital cultural landscape.
However, moderation is challenging to examine because it is easy to overlook – and this is intentional. Social media platforms trumpet the amount of content they host while keeping quiet about how much they remove; they constantly emphasise being mere custodians of content while downplaying their role as interveners, through actions such as removal, suspension, deletion and categorisation of specific content (Gillespie, 2018). Historically, tech companies have argued in favour of freedom of expression: free expression amplifies the value of platforms as content providers, which in turn drives a vast user base and revenue, and these companies have been reluctant to take responsibility for the user-generated content that appears on their platforms. A telling example came in September 2021, when Frances Haugen, a former Facebook employee, stated that the company's safety teams were significantly under-resourced and that the company stayed silent and failed to act when tackling misinformation would have hurt its business interests (Spring, 2021).
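To make concrete the point that platform intervention goes well beyond deletion, here is a minimal, entirely hypothetical sketch of the range of actions a moderation system might take on a flagged post – illustrative thresholds only, not any real platform's policy or API:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    """Interventions available to a platform, beyond simple deletion."""
    ALLOW = auto()         # leave the post untouched
    LABEL = auto()         # attach a warning or context label
    DOWNRANK = auto()      # keep the post but reduce its reach
    DELETE = auto()        # remove the post itself
    SUSPEND_USER = auto()  # temporarily lock the author's account


@dataclass
class FlaggedPost:
    severity: int        # reviewer-assigned severity, 0 (none) to 3 (severe)
    author_strikes: int  # the author's prior violations


def decide(post: FlaggedPost) -> Action:
    """Map a reviewed post to an intervention (illustrative thresholds only)."""
    if post.severity == 0:
        return Action.ALLOW
    if post.severity == 1:
        return Action.LABEL
    if post.severity == 2:
        return Action.DELETE if post.author_strikes > 0 else Action.DOWNRANK
    # severity 3: remove the content, and suspend repeat offenders
    return Action.SUSPEND_USER if post.author_strikes > 0 else Action.DELETE


print(decide(FlaggedPost(severity=3, author_strikes=2)))  # Action.SUSPEND_USER
```

Most of these interventions are invisible to everyone except the affected user, which is precisely why, as Gillespie notes, moderation is so easy to overlook.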
Taken together, content control is a challenging and complex social endeavour, so we as users and audiences must understand how platforms are moderated, by whom, and for what purpose; more importantly, we need to focus on a broader examination of the responsibilities of media and consider their greater obligations to the public.
Finally, Gillespie (2018) argues that social media platforms need to reflect substantively not only on their approach to moderation but also on how they view themselves and their users, and he offers several suggestions for strengthening platform moderation efforts:
- Protecting users as they move across platforms
- Rejecting the economics of popularity
- Distributing the agency of moderation, not just the work
- Designing for deliberate and actionable transparency
- Putting real diversity behind the platform
References
Amnesty International. (n.d.). Freedom of expression [Online]. Retrieved April 11, 2023, from https://www.amnesty.org/en/what-we-do/freedom-of-expression/
Australian Government. (n.d.). Online harms & safety [Online]. Retrieved April 6, 2023, from https://www.internationalcybertech.gov.au/our-work/security/online-harms-safety
Butler, J. (2021). Excitable speech: A politics of the performative (New ed.). Routledge. https://doi.org/10.4324/9781003146759
Chen, Q., Chan, K. L., & Cheung, A. S. Y. (2018). Doxing victimization and emotional problems among secondary school students in Hong Kong. International Journal of Environmental Research and Public Health, 15(12), 2665. https://doi.org/10.3390/ijerph15122665
Deibert, R., Palfrey, J., Rohozinski, R., Zittrain, J., Stein, J. G., Faris, R., Villeneuve, N., Anderson, R., Murdoch, S., & Rundle, M. (2008). Access denied: The practice and policy of global Internet filtering. The MIT Press. https://doi.org/10.7551/mitpress/7617.001.0001
Department for Digital, Culture, Media and Sport, & Home Office. (2019). Online harms white paper. https://www.gov.uk/government/consultations/online-harms-white-paper
Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001
Harmful Digital Communications Act 2015, No. 63, 2015 (New Zealand).
Hemphill, S. A., Tollit, M., Kotevski, A., & Heerde, J. A. (2015). Predictors of traditional and cyber-bullying victimization: A longitudinal study of Australian secondary school students. Journal of Interpersonal Violence, 30(15), 2567–2590. https://doi.org/10.1177/0886260514553636
Hinduja, S., & Patchin, J. W. (2009). Bullying beyond the schoolyard: Preventing and responding to cyberbullying. Corwin Press.
International Telecommunication Union. (2020). The World Telecommunication/ICT Indicators Database, 23rd edition. https://www.itu.int/pub/D-IND
Interparliamentary Coalition for Combating Antisemitism. (2013). ICCA Report, 2013. Retrieved from https://www.adl.org/sites/default/files/documents/assets/pdf/press-center/ICCA-Report.pdf
Macklin, G. (2019, July). The Christchurch attacks: Livestream terror in the viral video age. CTC Sentinel, 12(6), 18–29. https://ctc.westpoint.edu/christchurch-attacks-livestream-terror-viral-video-age/
Network Enforcement Act. (2017). BGBl. I S. 2615 (Germany).
O'Regan, K. (2020). Hate speech regulation on social media: An intractable contemporary challenge [Web log post]. Retrieved from https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/
Parekh, B. (2012). Is There a Case for Banning Hate Speech? In The Content and Context of Hate Speech (pp. 37–56). Cambridge University Press. https://doi.org/10.1017/CBO9781139042871.006
Price, L. (2021). Platform responsibility for online harms: towards a duty of care for online hazards. The Journal of Media Law, 13(2), 238–261. https://doi.org/10.1080/17577632.2021.2022331
Prohibition of Incitement to Hatred Act. (1989). No. 19/1989 (Ireland).
Smith, P. (2019). The challenge of drawing a line between objectionable material and freedom of expression online [Online]. Retrieved April 14, 2023, from https://theconversation.com/the-challenge-of-drawing-a-line-between-objectionable-material-and-freedom-of-expression-online-108764
Solopova, V., Scheffler, T., & Popa-Wyatt, M. (2021). A Telegram Corpus for Hate Speech, Offensive Language, and Online Harm. Journal of Open Humanities Data, 7. https://doi.org/10.5334/johd.32
Spring, M. (2021). Frances Haugen says Facebook is ‘making hate worse’ [Online]. Retrieved April 14, 2023, from https://www.bbc.com/news/technology-59038506
Flew, T. (2021). Regulating platforms. Cambridge: Polity, pp. 91–96.
Tkacheva, O., Schwartz, L. H., Libicki, M. C., Taylor, J. E., Martini, J., & Baxter, C. (2013). The Internet in China: Threatened Tool of Expression and Mobilization. In Internet Freedom and Political Space (pp. 93–118). RAND Corporation. http://www.jstor.org/stable/10.7249/j.ctt4cgd90.12
United Nations. (1948). Universal Declaration of Human Rights. Retrieved from https://www.un.org/en/universal-declaration-human-rights/index.html
Woolf, N. (2016, July 19). Leslie Jones bombarded with racist tweets after Ghostbusters opens [Web log post]. Retrieved from https://www.theguardian.com/culture/2016/jul/18/leslie-jones-racist-tweets-ghostbusters