How hate speech harms its target group and why it is difficult to regulate: the case of the social media hate-speech attack on Hailey Bieber.


This article will focus on the topic of hate speech. First, the concept of hate speech will be introduced, along with an explanation of why it spreads so easily on contemporary social media platforms. Then, the article will draw on the recent controversy between Hailey Bieber and Selena Gomez on Instagram and TikTok to illustrate the harm of hate speech and how the individuals it targets can feel powerless. Finally, the article will examine the external and internal factors that make hate speech difficult to control.

What is hate speech?

The advent of the Internet has made it possible for people to interact more (Flew, 2021) and to communicate unconstrained by distance. According to Statista (2023), as of 2023 the worldwide number of internet users had surpassed 5.16 billion, with a staggering 4.76 billion individuals using social media. As the possibilities for interaction increase, so do the problems they raise. For example, the anonymity and openness of the Internet have given many users an unrestricted ability to express their opinions or post comments about others online (Wang et al., 2018). In principle this is a positive development that encourages free speech. However, because diverse perspectives are so easy to encounter online, individuals regularly meet viewpoints that conflict with or directly contradict their own beliefs. If people are unable to tolerate such differences, or do not restrain their intolerance, hatred easily surfaces in their subsequent words and actions. Even though freedom of speech is a vital part of human development and progress, hate speech fosters hostility and mistrust in society (Flew, 2021).

Hate speech is defined as expression that encourages or incites hatred against a person or a group of people. It implicitly or explicitly associates a set of bad qualities or behaviours with the target, so that the group or individual comes to be widely perceived as undesirable (Parekh, as cited in Flew, 2021). On this rationale, acts of aggression and animosity towards a particular group come to be viewed as acceptable: the group itself is deemed to be in the wrong, so its statements and behaviour warrant criticism. This in turn provokes yet more hate speech against them. In addition, hate speech can also be generated inadvertently and informally, for example when someone likes or echoes a hateful comment, or shares it with others (Sinpeng et al., 2021). Structural inequalities are on display in this process: discrimination and alienation are treated as normal, and any vocalization by the target of the attack is ignored. If the target dares to retort, even more serious hate speech is elicited.

Individuals or groups who are the targets of hate speech are deprived of their right to exist, and the legitimacy of their voices is dismissed. As hate speech spreads, it also engenders rumours about those targeted, which may further inflame emotions. When these rumours come to the attention of other users, they may react with heightened emotional responses and post additional accusations and abusive content online (Wang et al., 2018). Unlike face-to-face communication, hate speech on social networks is especially prone to disseminate and proliferate, for two primary reasons. First, individuals may struggle to convey emotions effectively online because non-verbal cues such as facial expressions and body language are absent; this can lead to misunderstandings and an escalation of negative emotions such as anger. Second, the physical distance between individuals online emboldens some to attack people they do not know, or to use the platform as an outlet for their emotions. Moreover, when a post attracts tens of thousands of comments, any single abusive comment is easily drowned among them. The phenomenon is so prevalent that a survey of mainstream media and social media sites found that 25% of comments contained mocking and insulting language (Anderson et al., 2018).

So how does hate speech spread on contemporary social media, and how much harm does it cause to the targeted group or individual? This article will use the recent Instagram drama between Hailey Bieber and Selena Gomez to illustrate.

Case study: the hate-speech attack on Hailey Bieber on social media

It started when Selena posted photos of her vacation on social media and received some body-shaming comments. Shortly afterwards, Hailey posted a video whose background music contained the lyrics, “…saying she deserves it, but I’m saying god’s timing is always right.” Audiences took this as sarcasm aimed at Selena, so Hailey quickly deleted the video and offered an explanation, and Selena chimed in with a friendly attitude. Soon after, a friend of Hailey’s posted an Instagram snap resembling Selena’s “raised eyebrows” photo, which netizens again read as sarcasm, although both explained themselves afterwards. An increasing number of people then became involved, resurfacing videos from a few years earlier in which Hailey appeared to mock Selena’s friends, and an unwarranted onslaught of attacks on Hailey followed. The vitriol aimed at her was overwhelming, inundating social media platforms such as Instagram and TikTok with hate speech. The drama did not stop until Selena posted a message revealing that Hailey had received death threats and calling on the internet to end the hate speech (Murray, 2023).

The relationship between Selena and Hailey has long been under scrutiny and speculation by netizens and the media, because Justin Bieber had been in a relationship with Selena for many years before eventually marrying Hailey. Yet before this incident, Hailey’s comment sections rarely contained unprovoked abuse; most comments were praise or well-wishes. Even a small mistake, however, can trigger a tsunami of hate speech on social media. Suddenly, anyone could position themselves on the moral high ground to criticize her, and netizens’ tolerance for her words and actions abruptly dropped to zero. From then on, no matter what Hailey posted, the comments no longer had anything to do with what she said; she could speak, but it was as if she had been muted. For example, when she later posted a photo with Justin Bieber, comments like “Justin, are you being kidnapped by her” were highly liked and displayed at the top of the comment section.

How heinous a crime must a person commit to deserve death threats? Did the netizens’ anger really stem from the incident itself, or from some unforgivable mistake on her part? Not really. By this late stage, the hate speech had nothing to do with the incident itself; the incident was merely a reason, a trigger, and with that reason in hand all the hate speech became rationalized. At this point it was not only ordinary users who acted irrationally; internet celebrities and influencers also joined the “war” and began “taking sides”. For example, the beauty blogger jeffreestar, who has 15.9 million followers on YouTube, posted a video in which he showed his support for Selena by trashing the products of Hailey’s beauty brand from the inside out and throwing them in the bin. He attracted a wave of new followers, and was even praised for his “genuine character” and for “acting with integrity”. He believed he was on the side of justice, but in fact he had already joined the side of violence.

Hate speech is easily triggered, yet its occurrence is often unpredictable. Online users may appear to get along smoothly while there are no opposing opinions or positions, but tensions arise quickly once differences surface, and those who hold the power of speech can easily hurt those who are “silenced”. Nor is this unique to any one platform. On Reddit, for instance, white males who identify with geek culture have been able to insult and harm women without restraint, because administrators do not want to alienate any customers no matter how serious the problem becomes: banning and limiting user behaviour means less traffic (Massanari, 2017).

Challenges of regulating hate speech

If hate speech is so harmful, why not regulate it thoroughly? The challenge lies in balancing the regulation of hate speech with the promotion of free speech, a dilemma that has recurred throughout the history of the Internet (Flew, 2021). The first problem is the diversity of cultural environments and social contexts, and of legal frameworks and institutions, around the world. Moderation on the internet is often carried out remotely, and obstacles and ambiguities inevitably arise in the process. Hate speech is heavily shaped by local context, so page administrators need a deep understanding of local culture and knowledge to tackle the issue effectively (Sinpeng et al., 2021). But that requirement creates a problem of its own.

The majority of social media content uploaded by users requires manual review, usually through a combination of human and machine evaluation, and the responsibility for commercial content moderation is typically delegated to workers who are underpaid and hold low social status. Many employees receive no training before taking on this work and do not even understand what the job entails (Roberts, 2019). This makes gaps in the management of hate speech highly likely. On top of that, on some high-traffic websites users submit a staggering number of reports and feedback, far beyond the capacity of algorithms or software to handle (Roberts, 2019). It is also likely that the employees responsible for this job lack the high level of cognitive and cultural competence it requires.

Furthermore, the exact definition of hate speech varies from platform to platform, and the rules surrounding it can be opaque and difficult to decipher. Users, and even professionals, are generally unaware of the standards and processes used in reviewing social media content. Facebook explains that it keeps its specific rules secret for fear that some users would exploit a full disclosure to game the rules (Roberts, 2019). And while certain websites and platforms may heavily regulate racist or sexist hate speech, there is no clear standard for removing negative comments such as those directed at Hailey. Although Sinpeng et al. (2021) argue that more work is needed on consistent hate-speech policies, detection and timely removal remain challenging for social media in a dynamic speech environment.

The last reason is also the most important. From a business perspective, the ultimate goal of both social networks and recommender systems is to maximize revenue (Musco et al., 2018), which means they are likely to recommend only the content users love to see. In the case above, jeffreestar’s video was targeted and pushed to users who embraced Selena and rejected Hailey, a process that further amplified the pernicious effects of the hate speech. Mainstream media outlets that rely on user engagement are especially permissive, accepting all content uploaded by users without pre-screening (Roberts, 2019). In addition, when comments are sorted by default, those with many likes and replies are ranked above others, which can mislead and bias the users who see them (Massanari, 2017); many opinions then converge, eventually evolving into the extreme case in which most users hold the same opinion (Lee, 2006). All of these features and designs implicitly amplify the negative effects of hate speech, showing that social networks play a crucial role in mitigating or promoting it (de Arruda et al., 2022).


The hidden intent behind hate speech is to demean individuals or groups, and it greatly contributes to an environment of discrimination, prejudice, intolerance, and hostility that can even evolve into a form of violence. What is certain is that the absolutism of free speech needs to be drastically modified (Flew, 2021). Despite the many challenges, governments, societies, social media platforms, and even individuals should do more to counteract hate speech.

(word count: 2034)

Reference list

Anderson, A. A., Yeo, S. K., Brossard, D., Scheufele, D. A., & Xenos, M. A. (2018). Toxic talk: How online incivility can undermine perceptions of media. International Journal of Public Opinion Research, 30(1), 156-168.

de Arruda, H. F., Cardoso, F. M., de Arruda, G. F., Hernández, A. R., da Fontoura Costa, L., & Moreno, Y. (2022). Modelling how social network algorithms can influence opinion polarization. Information Sciences, 588, 265-278.

Flew, T. (2021). Regulating platforms. John Wiley & Sons.  

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.

Murray, C. (2023, March 28). Is The Selena Gomez And Hailey Bieber Social Media Drama Over? The Whole Controversy Explained. Forbes.

Musco, C., Musco, C., & Tsourakakis, C. E. (2018). Minimizing polarization and disagreement in social networks. Proceedings of the 2018 World Wide Web Conference on World Wide Web – WWW ’18.

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award.

Statista. (2023, April 3). Global digital population 2022. Statista.

Lee, E.-J. (2006). Deindividuation Effects on Group Polarization in Computer-Mediated Communication: The Role of Group Identification, Public-Self-Awareness, and Perceived Argument Quality. Journal of Communication, 57(2), 385–403.

Wang, Q., Yang, X., & Xi, W. (2018). Effects of group arguments on rumor belief and transmission in online communities: An information cascade and group polarization perspective. Information & Management, 55(4), 441-449.
