Online Poison: The Deadly Spread of Hate Speech on the Internet

The internet has become a crucial component of people's daily lives in the current digital era. But as more and more people take advantage of the convenience and richness of the online community, some serious issues have surfaced. Hate speech and online harm are among the most serious and visible of these problems. Numerous instances of hate speech on the Internet have come to light in recent years, showing that this problem has become urgent and merits greater attention.

Tracing the roots of online hate speech: an introduction to hate speech and online harm

There is still no settled definition of hate speech in international human rights law, as the concept covers many sensitive and controversial aspects (United Nations, n.d.). Essentially, however, hate speech describes speech or language targeting race, ethnicity, religion, gender, sexual orientation or other personal characteristics that is intended, or likely, to offend, threaten or incite hatred against a specific individual or group (Flew, 2021). In simpler terms, hate speech is a way of inflicting harm on others through words, and it is no less psychologically traumatic than physical harm (Sinpeng et al., 2021). Tirrell (2017) likens hate speech to a deadly chronic poison: a serious threat to the health, and even the life, of the victim.

“Social media provides a global megaphone for hate.” -ANTÓNIO GUTERRES, United Nations Secretary-General, 2021

According to the latest global digital and social media statistics, the Internet already has about 4.74 billion active users, more than half of the total global population (Smperth, 2022). The rise of the Internet has provided a broader platform for information dissemination and communication, but it has also provided more space and opportunities for the spread of hate speech. Flew (2021) points out that hate speech has proliferated online in recent years, resulting in greater societal polarization and a breakdown in the ability to engage in meaningful dialogue. Compared with traditional media, the Internet is characterized by the speed and scope of information dissemination, low delivery costs and high interactivity (Ünver, 2018), and these characteristics have accelerated the spread of hate speech. More importantly, hate speech reaches global and diverse audiences in real time, and the relatively persistent nature of its content allows it to resurface and recirculate over time (United Nations, n.d.). In addition, the anonymity of the internet and the latitude of free expression make it easier for hate speech to occur and spread online, especially on social media platforms and in gaming communities where regulation is lacking. This lack of effective oversight makes it easier for hate speech and online harm to reach young people and adolescents.

Online hate speech is like deadly chronic poison: an insight into the dangers of online hate speech

Hate speech on the internet can be more damaging than traditional offline hate speech. First, it is harder to avoid: online speech can remain on the internet and cannot easily be deleted or eliminated, and even when it is deleted, it can be redistributed through screenshots or shares, further compounding the victim's suffering (UNESCO, 2021). Second, the anonymity of the internet allows many people to make offensive statements without being identified, and they are difficult to hold accountable (Catherine & Stefan, 2020). This adds to victims' suffering, as they cannot tell who made the offensive statements or what motivated them. Finally, hate speech on the internet can trigger a group effect: through social media platforms, one person's offensive comments can prompt others to follow suit (Zachary, 2019), leading more people to join the attack and further aggravating the victim's suffering.

In recent years, numerous catastrophic news stories have been attributed to online hate speech, as the internet and social media platforms have become breeding grounds for it. Studies conducted as early as 2012 indicate that the Internet and social media can influence suicidal behavior, and that victims of cyberbullying are nearly twice as likely to attempt suicide as those who are not targeted (Luxton et al., 2012). Since then, the growth in Internet users, the complexity of the political environment and the escalation of social tensions have only exacerbated this phenomenon.

In 2019, the suicide of Choi Jin-ri (known as Sulli), attributed in large part to hate speech on the Internet, caused a global sensation and attracted widespread attention from all walks of life. It also became an important trigger for social media platforms to strengthen the supervision of hate speech and for lawmakers to improve related legislation.

Although Choi Jin-ri was highly sought after for her fashion style and image during her lifetime, she also became a victim of the Korean fandom. She was often attacked by hate speech online, including attacks on her body, appearance, personality and private life. These attacks not only damaged her reputation but also caused serious harm to her mental health.

“I could have been scared. But I wasn’t because I thought it would be nice if more people could discard their prejudices.”- Choi Jin-ri.

Choi Jin-ri publicly expressed her distress at online hate speech and cyberbullying many times during her lifetime, but she did not receive enough protection and support. This cyberbullying and hate speech caused great harm not only to her but also to society as a whole. What is more, such online hate speech has, to a certain extent, fuelled hatred and confrontation among fan groups, damaging the entire entertainment industry.

The incident led to the swift introduction of the “Sulli Act” that same year, which served as a basis for improving and amending several laws on online speech (The Korea Times, 2019). However, this has had only a minimal effect on online hate speech. As Flew (2021) argues, although many countries have begun to introduce laws restricting hate speech, such restrictions sit in an uneasy tension between freedom of expression and social stability. In general, many factors make online hate speech a very difficult problem to solve (Catherine & Stefan, 2020), but its extremely serious negative impact makes it one that needs to be addressed as soon as possible.

Facebook’s battle with hate speech: a case study on hate speech regulation

According to statistics, nearly 3.19 billion users worldwide are active on social media platforms, talking and interacting with each other by generating and sharing content. There is no doubt that platform companies have an important role to play in addressing hate speech. Flew (2021) and Sinpeng et al. (2021) share this view: platforms should strengthen content review and take effective measures to prevent the spread of hate speech, while complying with local laws and regulations.

(Actioned hate speech content items on Facebook worldwide from 4th quarter 2017 to 4th quarter 2022)

As the market leader among social media platforms, Facebook was the first social network to surpass 1 billion registered accounts and currently has more than 2.9 billion monthly active users (Statista, 2023). The fact that the service covers most countries around the world makes the issue of hate speech on the platform even more complex and serious. Facebook aims to give its users the best possible experience and the ability to enjoy freedom of expression across languages, borders and cultural barriers. It has therefore been taking aggressive steps toward policing hate speech.

Facebook’s moderation team monitors and reviews content posted on the platform using a combination of advanced artificial intelligence and human review. To better detect more subtle forms of hate speech (such as sarcasm, slang, hate symbols and images), Facebook has deployed a new reinforcement learning framework called Reinforced Integrity Optimizer (RIO) on top of its original AI (Meta AI, 2020). In addition, Facebook has a dedicated team of content reviewers who examine content that may violate the platform’s policies and remove it as soon as possible. In the fourth quarter of 2022, Facebook removed over 11 million pieces of hate speech content (Statista, 2023) to protect its users from verbal attacks and hate speech.

Facebook’s artificial intelligence system is now more advanced and helps its team censor and regulate hate speech more efficiently; Facebook spokesman Andy Stone has stated that almost all deleted hate speech was discovered by artificial intelligence before users reported it (Deepa, 2021). Even so, a number of users report that Facebook has mistakenly censored normal speech. Decisions about content moderation are rarely clear-cut, existing instead within a complex web of considerations (Roberts, 2019), and artificial intelligence cannot understand language and cultural context the way humans do, which makes it easy to misjudge statements that are not malicious as hate speech.

On the one hand, Facebook’s community standards act as a beacon for the media platform’s community, aiming to ensure that users on the platform behave ethically and legally as well as provide a safe social media experience. The development of these standards has been made more universal and authoritative by taking into account feedback from users and advice from experts in various fields (Meta, n.d.). In addition, community standards also play the role of education and publicity, helping users better understand and abide by community rules by providing relevant resources and information (Brett, 2023).

On the other hand, the potentially subjective and opaque nature of Facebook’s community standards may lead different content reviewers to interpret and apply them differently. Unfair and inconsistent content review inevitably breeds a sense of injustice and frustration among users. What is more, community standards may be influenced by political, commercial or other interests, leading to over-censorship or undue tolerance of certain content and thereby affecting freedom of expression.

Facebook is committed to collaborating with organizations and governments to tackle hate speech, and actively participates in international organizations and forums, working with policymakers, academics and activists to discuss and develop better strategies and approaches to combat hate speech on all fronts (Meta, 2020).

However, according to the survey by Sinpeng et al. (2021), Facebook struggles to find suitable partners in authoritarian countries or countries with highly polarized politics. As a result, hate speech in these regions is harder to monitor, which may frustrate users and even cost Facebook market share there.

Overall, Facebook has been working to combat hate speech and has taken a variety of steps to address it. Although this issue still exists, Facebook will continue to work with other relevant parties and continuously improve its governance mechanism to protect users from verbal attacks and hate speech, while also maintaining the fairness, diversity and inclusion of the platform.

Navigating the Choppy Waters Ahead: future perspectives on the regulation of online hate speech

Smarter artificial intelligence systems, a relatively well-developed legal system and increased awareness among internet users have given media platforms more support than ever in regulating online hate speech. However, the complex and diverse nature of online hate speech makes it difficult for social media platforms to achieve effective results. The Anti-Defamation League calculates that the number of Americans experiencing online hate in 2022 was almost unchanged from the previous two years (ADL, 2022). Most importantly, today’s more complex international relations and growing social and class conflicts are likely to make online hate speech even more widespread and damaging, and regulating it on media platforms will be an even greater challenge in the future. This blog therefore closes with some suggestions to help media platforms meet the difficult challenges ahead.

First, AI censorship and human censorship should work in closer tandem in order to regulate hate speech more accurately. Increasingly efficient, self-learning artificial intelligence has made many media platforms overly reliant on automated censorship. In the case of Facebook, the platform’s AI system has replaced manual teams in many aspects of review, which has led to a smaller human team, a decline in the quality of manual review and a high rate of missed cases (Deepa, 2021). On the other side of the coin, there are many aspects of human review that artificial intelligence cannot replace: comments that are mistakenly deleted not only degrade the user experience but also reduce the diversity of platform content. AI systems and human reviewers working in tandem, each playing to their unique strengths, will therefore allow for more accurate regulation of hate speech.
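The division of labour described above can be sketched in code. The following is a minimal illustration, not Facebook's actual system: the classifier here is a toy keyword heuristic standing in for a real ML model, and the thresholds, word list and function names are all hypothetical. The key idea is that the AI acts alone only at the confident extremes, while ambiguous cases are routed to a human review queue.

```python
# Illustrative sketch of AI/human triage in content moderation.
# The scoring function is a stand-in for a real ML classifier;
# thresholds and the keyword list are invented for illustration.

AUTO_REMOVE = 0.95   # above this score, the AI removes content on its own
AUTO_ALLOW = 0.10    # below this score, the AI allows content on its own

def score_hate_speech(text: str) -> float:
    """Return a probability-like hate-speech score for a piece of text.

    A real system would use a trained classifier; this toy heuristic
    just measures the density of flagged keywords.
    """
    toxic_words = {"hateword1", "hateword2"}  # hypothetical blocklist
    words = text.lower().split()
    hits = sum(1 for w in words if w in toxic_words)
    return min(1.0, hits / max(len(words), 1) * 5)

def triage(text: str) -> str:
    """Decide whether content is removed, allowed, or sent to a human."""
    score = score_hate_speech(text)
    if score >= AUTO_REMOVE:
        return "remove"          # AI acts alone only when very confident
    if score <= AUTO_ALLOW:
        return "allow"
    return "human_review"        # ambiguous cases go to a human queue
```

The point of the two thresholds is that narrowing the gap between them shifts work from humans to the machine, at the cost of more mistaken deletions of harmless speech; widening it does the reverse.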

Furthermore, the online environment and users’ own awareness of hate speech cannot be ignored. Smarter technology, more scientific regulation and better laws can only limit hate speech on the internet; the users who spread it are the key to eradicating it. Media platforms should place more emphasis on user awareness and education, and actively run campaigns about online hate speech in order to create a positive online environment. A healthy online environment, in turn, makes users more conscious of maintaining that atmosphere. In short, the efforts of media platforms and the cooperation of Internet users are both indispensable for achieving harmony and stability in cyberspace.


ADL. (2022). Online Hate and Harassment: The American Experience 2021. Retrieved from:

Helling, B. (2023). Facebook Community Standards: How They Work In 2023. Retrieved from:

Catherine, O., & Stefan, T. (2020). Hate speech regulation on social media: An intractable contemporary challenge. Retrieved from:

Deepa, S., Jeff, H., & Justin, S. (2021). Facebook Says AI Will Clean Up the Platform. Its Own Engineers Have Doubts. Retrieved from:

Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 91-96.

Snapes, L. (2019). Sulli, K-pop star and actor, found dead aged 25. Retrieved from:

Luxton, D. D., June, J. D., & Fairall, J. M. (2012). Social media and suicide: A public health perspective. American Journal of Public Health, 102(Suppl 2), S195-S200.

Meta AI. (2020). How AI is getting better at detecting hate speech. Retrieved from:

Meta. (n.d.). Facebook Community Standards. Retrieved from:

Meta. (2020). Sharing Our Actions on Stopping Hate. Retrieved from:

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press, pp. 33-72.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland.

Smperth. (2022). Social Media Statistics for 2023 // Facts & Figures. Retrieved from:

Statista. (2023). Actioned hate speech content items on Facebook worldwide from 4th quarter 2017 to 4th quarter 2022. Retrieved from:

Statista. (2023). Most popular social networks worldwide as of January 2023, ranked by number of monthly active users. Retrieved from:

The Korea Times. (2019). After K-pop death, ‘Sulli’s Law’ being considered to fight cyberbullying. Retrieved from:

Tirrell, L. (2017). “Toxic Speech: Toward an Epidemiology of Discursive Harm.” Philosophical Topics, 45(2), 139-161.

UNESCO. (2021). Addressing hate speech on social media: contemporary challenges. Retrieved from:

United Nations. (n.d.). Understanding hate speech. Retrieved from:

Ünver, H. (2018). Global Networking, Communication and Culture: Conflict or Convergence? Spread of ICT, Internet Governance, Superorganism Humanity and Global Culture (1st ed. 2018.). Springer International Publishing.

Laub, Z. (2019). Hate Speech on Social Media: Global Comparisons. Retrieved from:
