Online Harassment and Hate Speech: The Vicious Villain of the Digital Age

Online Harassment and Hate Speech – An Ever-Growing Issue

In 1969, renowned Stanford University psychologist Philip Zimbardo conducted a social experiment involving a group of female participants. Half of the participants received hooded jackets that covered their name tags, effectively hiding their identities, while the other half kept their name tags and faces visible. Every participant was then instructed to administer electric shocks to appointed recipients. It was later found that the individuals in the anonymous group, whose identities were concealed, administered the most shocks and for the longest amount of time. Zimbardo went on to propose the concept of deindividuation, which hypothesises that when people obtain anonymity, they experience a psychological state of decreased self-evaluation and decreased evaluation apprehension, which can encourage antinormative behaviours, including violence (Diener, Lusk, DeFour, & Flax, 1980).

Though this experiment was conducted half a century ago, its findings still ring true today, when anonymity is easier to obtain than ever thanks to the Internet. Have you ever left a mean comment under a stranger’s post on Facebook? Have you ever openly criticised a celebrity on Twitter for the way they look or dress? Even if you haven’t done any of those things (which, props to you), it’s still very likely that the thought has crossed your mind, because of how absurdly easy it is to get away with. Who is going to hold you accountable for your behaviour on the Internet, when all they have of you is a profile picture and some random username like ILovePizza420? Either way, it’s important to understand that all of the behaviours listed above can be considered online harassment, though the degrees vary.

Online harassment can be defined as the use of information and communication technologies by an individual or group to repeatedly cause harm to another person; depending on its severity, such conduct may be harmful, illegal, or both. Where the line between illegal and harmful content sits varies between countries, but the general distinction is between forms of online content and speech that are illegal under existing criminal and civil law; forms increasingly regarded as a threat to individuals or groups on the basis of gender, race, sexuality, religion, or other markers of social identity; and forms regarded as a threat to democracy, social cohesion, public health, or social trust (Flew, 2021).

A rational person might be baffled by the idea of harassing someone, online or not, but it has been proven again and again that the world we live in can be a bleak place; cyber harassment, or cyber abuse, is and will continue to be a prevalent issue. A Pew Research Center survey of U.S. adults conducted in September 2020 found that 41% of Americans have personally experienced some form of online harassment, including offensive name-calling, stalking, physical threats, and sexual harassment. According to the same survey, 75% of targets of online abuse – equalling 31% of Americans overall – say their most recent experience was on social media (Vogels & Atske, 2021).

So why is this phenomenon happening at such an alarming rate? Once again, we have to circle back to Zimbardo’s deindividuation theory: the veil of anonymity allows bullies to avoid facing their victims, so abuse requires less courage and comes with the illusion that the bully won’t get caught. Another major contributing factor is the bandwagon effect: in a cyberbullying context, a deindividuated individual is more likely to conform to a group norm, and if that norm is to be abusive and aggressive online, the individual is likely to behave in a similar manner (Heatherington & Coyne, 2017). The fact that many incidents of cyber harassment occur within, and are often perpetuated by, a younger demographic also suggests that people who engage in this kind of behaviour are often too immature to consider the consequences of their actions. The National Crime Prevention Council reports that in a survey of teenagers, 81% said they believe others cyberbully because they think it’s funny (National Crime Prevention Council, 2007). Since they cannot physically observe their victims’ responses, cyberbullies may not realise how much damage they are doing.

Case Studies

We are now living in an era where social media has an iron grip on our daily lives. Though we can’t deny the various benefits it offers – easier communication with friends, pictures of silly animals, Minion memes that your grandma shares on Facebook, and so on – it is crucial to acknowledge that these platforms also have the power to cause immense harm when used with malicious intent. One form of online harm that has become extremely ubiquitous in this digital age is image-based abuse. According to the Australian eSafety Commissioner, image-based abuse (IBA) happens when an intimate image or video is shared without the consent of the person pictured, including images or videos that have been digitally altered (using Photoshop or specialised software). In the past, perpetrators used more conventional means to distribute these images, such as letterboxes or street posters; with the growth of the Internet, however, they can commit the act faster, and reach further, than ever (Henry, Flynn, & Powell, 2019).

Tyler Clementi – retrieved from the Clementi Foundation

This can be observed in the tragic suicide of Tyler Clementi, an American student attending Rutgers University–New Brunswick. On September 19, 2010, Clementi’s roommate, Dharun Ravi, and another student, Molly Wei, used a webcam to secretly watch Clementi kissing another man in his dorm room; Ravi then tweeted about what he had seen. It didn’t stop there: Ravi planned a second attempt to spy on Clementi on September 21, this time organising it like a watch party by tweeting to his followers, “Anyone with iChat, I dare you to video chat me between the hours of 9:30 and 12. Yes, it’s happening again.” (Parker, 2012). When Clementi returned to his room and noticed Ravi’s webcam pointed at his bed, he unplugged the camera, foiling Ravi’s plan. After reporting the incident to his resident assistant and several university officials, on September 22, Tyler Clementi jumped to his death from the George Washington Bridge. The case caused public outrage not only within the Rutgers community but far beyond, bringing nationwide attention to the bullying of LGBT youth. Spirit Day, first observed on October 20, 2010, on which people wear purple to show support for young LGBT victims of bullying, was inspired by Clementi’s suicide; the event received widespread support from GLAAD, Hollywood celebrities, and over 1.6 million Facebook users. The Human Rights Campaign, a gay-rights advocacy organisation, released a plan aimed at increasing awareness of gay-related suicide and harassment around the U.S. Tyler Clementi’s case sadly aligns with well-established statistics: studies have found that Facebook users in a protected group, such as lesbian, gay, bisexual, transgender, queer, intersex and asexual (hereafter LGBTQ+) individuals, may experience a great deal of hate speech content online (Sinpeng, Martin, Gelber, & Shields, 2021). Roughly seven in ten lesbian, gay or bisexual adults have encountered harassment online, and fully 51% have been targeted for more severe forms of online abuse (Vogels & Atske, 2021).

The consequences of online harassment are wide-ranging and can have a lasting impact on victims, who may experience social isolation, difficulty sleeping, and changes in behaviour. Victims may also be reluctant to seek help or report incidents for fear of retaliation, further amplifying the harm caused by cyberbullying. In one study of cyberstalking involving 6,379 social media users, just 2.5 percent reported that cyberstalking had no negative consequences for them. A thematic analysis of the impact of cyberstalking described one participant explaining that, due to the extreme fear she had experienced, her “whole life stopped,” while another became very ill and now suffers from “complex PTSD/depression as a result of the harassment and abuse” (Stevens, Nurse, & Arief, 2021).

Regulating hate speech and online harms in the digital age

This brings us to the real question: how do we manage this issue? The cold, hard truth is that for as long as social media and the Internet exist, online harassment and hate speech will remain with us. Though we, as users, can seek out resources and better educate ourselves about the dangers of the Internet, platform regulation is also a crucial part of preventing online harassment and hate speech. Social media platforms and other online communities have a responsibility to ensure that their services are safe and inclusive for all users, and to take steps to prevent and address harassment and hate speech. Regulation should not target content directly but should instead focus on the design of the service, its business model, the tools the platform provides for users, and the resources it devotes to user complaints and user safety, as each of these aspects influences information flows across the platform (Moore & Tambini, 2021).

One important measure platforms can implement is a clear and comprehensive terms of service agreement. This agreement should outline the platform’s policies on harassment and hate speech, provide clear guidelines for what constitutes abusive behaviour, and spell out the consequences of violating these policies, such as account suspension or a permanent ban.

Platforms can also implement moderation policies that prioritise the removal of harmful content. This can include automated tools that detect and remove hate speech, as well as human moderators who monitor online conversations and remove harmful content. It is worth keeping in mind, however, that it is unreasonable to expect an online harms regulator to become expert in all regulatory fields in order to deal with these sector-specific issues, or indeed to expect platforms to develop an appreciation of risks in relation to some highly specialist forms of content (Moore & Tambini, 2021).

Another important mechanism is reporting tools. Platforms should make it easy for users to report abusive behaviour, and should respond quickly and effectively to reports of harassment and hate speech. This can include providing users with a clear reporting process, such as a reporting button or form, and ensuring that reports are reviewed promptly by trained moderators.

Platforms can also adopt community guidelines that promote respect, diversity, and inclusivity. These guidelines should be clear and comprehensive, enforced consistently and fairly, and should include a definition of harassment and hate speech along with examples of prohibited behaviour. Platforms should also provide users with education and resources on how to recognise and respond to harassment and hate speech.
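To make the two mechanisms concrete – an automated screening step and a user-report queue routed to human moderators – here is a purely illustrative sketch in Python. It is not any platform’s actual system: real services rely on trained classifiers and contextual review rather than static word lists, and every name and list entry below is invented for the example.

```python
from dataclasses import dataclass, field

# Toy blocklist -- real systems use trained classifiers and human context,
# not a static word list. These entries are invented placeholders.
BLOCKLIST = {"slur1", "slur2", "threat"}

@dataclass
class ModerationQueue:
    pending: list = field(default_factory=list)  # posts awaiting human review

    def screen(self, post: str) -> str:
        """Automated step: remove posts containing blocklisted words."""
        if set(post.lower().split()) & BLOCKLIST:
            return "removed"
        return "published"

    def report(self, post: str) -> None:
        """Reporting tool: anything a user flags goes to human moderators."""
        self.pending.append(post)

queue = ModerationQueue()
print(queue.screen("what a lovely photo"))   # published
print(queue.screen("this is a threat"))      # removed
queue.report("borderline post flagged by a user")
print(len(queue.pending))                    # 1
```

Even this toy version shows why automation alone is not enough: the keyword screen catches only exact matches, so the report queue exists precisely to route everything it misses to a human.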

On top of that, platforms can use technology itself to prevent and address online harassment and hate speech. This can include tools that filter out hate speech, or that allow users to block or mute others who engage in abusive behaviour; artificial intelligence and machine learning algorithms can also help detect and remove harmful content. Yet again, it is important to remember that platforms must balance the need to prevent harassment and hate speech against the need to protect free speech: while they have a responsibility to ensure the safety and inclusivity of their users, they must also respect users’ right to express themselves online. In her article Hate speech online: an (intractable) contemporary challenge?, author Catherine O’Regan notes that one of the biggest challenges for the global protection of freedom of speech is that there is deep disagreement as to what freedom of speech requires – and one of the core sources of disagreement is what we should do about speech that is hateful (O’Regan, 2018).

We’re not exactly inculpable…

After all, as humans, we’re prone to making mistakes. Social media is a powerful outlet for our emotions and our need to connect with others; sometimes, however, this freedom turns awry and creates an echo chamber of hate and negativity. Though most platforms nowadays are trying their best to minimise online harm through regulatory policies, it’s worth remembering why these policies have to exist in the first place. We, as users, more than any company or corporation, should acknowledge our responsibility for keeping online communities safe for others. The next time you go on social media, think long and hard about the real people you’re interacting with behind the screen, rather than just their icons and usernames.


Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.

Diener, E., Lusk, R., DeFour, D., & Flax, R. (1980). Deindividuation: Effects of group size, density, number of observers, and group member similarity on self-consciousness and disinhibited behavior. Journal of Personality and Social Psychology, 39(3), 449–459.

Flew, T. (2021). Regulating Platforms. John Wiley & Sons.

Henry, N., Flynn, A., & Powell, A. (2019). Image-Based Sexual Abuse: Victims and Perpetrators. Trends and Issues in Crime and Criminal Justice, 572, 1–19.

Moore, M., & Tambini, D. (2021). Regulating Big Tech: Policy Responses to Digital Dominance. Oxford University Press.

National Crime Prevention Council. (2007, February 28). Teens and Cyberbullying.

O’Regan, C. (2018). Hate Speech Online: an (Intractable) Contemporary Challenge? Current Legal Problems.

Parker, I. (2012, January 30). The Story of a Suicide. The New Yorker.

Stevens, F., Nurse, J. R. C., & Arief, B. (2021). Cyber Stalking, Cyber Harassment, and Adult Mental Health: A Systematic Review. Cyberpsychology, Behavior, and Social Networking, 24(6), 367–376.

Vogels, E. A., & Atske, S. (2021, May 25). The State of Online Harassment. Pew Research Center: Internet, Science & Tech.
