An analysis of hate speech on social media


Human beings are living in an era of platformization, in which both society and the economy are largely shaped and dominated by digital platforms. These platforms, such as Facebook, Amazon, TikTok, Google, and Alibaba, are not just technology products that people can choose from; they are also market leaders. Thanks to the extremely rapid development of technology, mainstream platforms have transformed communication over the past 20 years by constantly optimizing and upgrading their technology. Today’s communication is many-to-many, with strong interaction, engagement, and speed. This is a good thing in a modern society that demands efficiency, but every coin has two sides. The dominance of digital platforms can also enable the proliferation of hostile and violent hate speech on the basis of race, color, religion, nationality, and gender, and amplify its harmful effects. This blog post will introduce the development of hate speech on the internet, use Douyin as a case study, explain how challenging it is for platforms to combat hate speech, and offer some advice that may help platforms combat and regulate it.

Development of online hate speech

Hate speech is characterized as offensive speech directed at a group or individual, for example on the basis of race, religion, or gender, that may threaten the peace of society (United Nations, 2023). In today’s digital age, the use of digital media platforms has become very common: they serve as an inexpensive medium of communication that allows every user to easily reach millions of others. The development of social media systems that provide space for speech harmful to certain groups also poses a challenge that manifests itself in many ways, including bullying, offensive content, and hate speech (Mondal et al., 2017). The incidence of online hate speech is rising rapidly all over the globe, and it is associated with misinformation and extremist political material, especially in countries in political turmoil, countries with histories of racial, religious, and gender discrimination, and countries experiencing mass migration due to war, famine, political persecution, and poverty (Singer & Brooking, 2018).

Leaving aside social media, hate speech against, for example, a particular race or religion has always existed in real life; social media platforms simply amplify the impact such comments can have. Moreover, authorities in many countries are rapidly recognizing that hate speech is a serious problem, especially since it is difficult to establish barriers on the Internet to prevent hatred from spreading between countries or against minority groups (Mondal et al., 2017). Although social media platforms are committed to improving their monitoring technology and the accuracy of their algorithms, and authorities in many countries have introduced laws and regulations to try to control and curb hate speech online, the results have been minimal.

A case study of cyberbullying on Douyin

Have you ever experienced cyber violence on social media? 

This problem has become serious in China: a whopping 38 percent of people have experienced cyber violence on Chinese social media platforms such as Douyin (Thomala, 2023). Douyin is the Chinese counterpart of TikTok and likewise functions as a short-video social media app. Thomala (2024) points out, “TikTok’s sister app Douyin has amassed 752 million monthly active users in China in November 2023”. This shows what a huge presence the platform has in China. Contrary to the common perception that marginalized people are most often the targets of cyber violence, many small and unexpected things on Douyin can trigger people to commit cyber violence against others. For example, the BBC report “Online trolls are taking a toll in China” describes a 23-year-old woman who posted a group photo of a visit to her grandfather with her postgraduate admission letter (Ng, 2023). But people’s attention was not on the fact that this brilliant young woman had gotten into her dream college through hard work; they focused on the fact that she had dyed her hair pink, and just because of her pink hair, people started spreading rumors, insults, and taunts about her. She did defend herself, but she eventually suffered from depression due to the online violence, and on January 23, 2023, she took her own life. Not coincidentally, this type of incident is a regular occurrence on Douyin, as seen in “Liu Xuezhou: Outrage over death of ‘twice abandoned’ China teen”, about a teenager who had hopefully sought out on the internet the biological parents who had abandoned him earlier in his life, only to be traumatized after being rejected by them again (BBC News, 2022). Unfortunately, netizens attacked him based on false statements made by his parents, and the boy took his own life under the combined pressure of cyber violence and depression.

Thus, these two cases from Douyin reveal that hate speech by users on online media platforms is no longer targeted only at specific groups or events. Instead, it extends to a wide range of areas: anything from personal encounters to societal issues can become the focus of attacks. This indicates that the scope and impact of cyber violence are growing and becoming more diverse. One reason is that the anonymity of social media gives users an avenue to express themselves without revealing their true identities, which has emboldened some people to spread hate speech and has amplified human weaknesses. Second, platform algorithms tend to create an echo chamber effect, which limits the exchange of different viewpoints, fosters groups of users with similar views, and reinforces their shared narratives and ideas (Cinelli et al., 2021). Social media uses algorithms to gather information and preferences from users and then push content they may be interested in, so users tend to be exposed only to information that aligns with their existing viewpoints, which reinforces biased and discriminatory perceptions of particular groups or events. These features of social media increase the challenge platforms and governments face in regulating online hate speech.

Challenges of regulating online hate speech

The difficulties that social media platforms and individual countries face in controlling hate speech on the Internet fall into five main points: the difficulty of finding a balance between freedom of expression and the prevention of hate speech, differing definitions of hate speech, limited detection technology, the difficulty of international cooperation, and the difficulty of penalizing users. Because it is so difficult to control, let alone eliminate completely, hate speech on the Internet is already a major problem that can have many tragic consequences and threaten the peace and security of societies.

First, freedom of speech is a fundamental democratic right that allows people to freely express their thoughts and opinions, no matter how unpopular or controversial those opinions may be. The ability of people to debate their views without fear or hindrance is necessary for the governance and social development of society, and freedom of speech is a cornerstone of democracy (Riemer & Peter, 2021). However, when it comes to whether freedom of speech includes hate speech, a distinction should be made between the different categories of hate speech and the issues of coverage and scope of protection (Yong, 2011). The challenge of balancing freedom of expression against hate speech is therefore that governments and platforms need to protect freedom of expression and support civic engagement while preventing speech that could cause actual harm and disrupt social cohesion. Social media platforms and government agencies need to develop policies that identify and limit hate speech without overly restricting or censoring speech that is merely an expression of opinion or criticism. In addition, because the definition of hate speech varies with the country, culture, laws, and policies of social media platforms, it is very difficult for either governments or platforms to develop policies that will truly eradicate online hate speech. For example, Chinese netizens are often sensitive to social media speech that mentions Japan, due to the history of the Japanese invasion of China. Thus, comments praising Japan on Chinese social media may be viewed as sensitive or potentially insulting, even if the comments are not inherently malicious. This phenomenon reflects how history and culture influence the perception and interpretation of social media content. These differing definitions of hate speech also make it challenging to regulate.

Furthermore, social media platforms rely on a combination of artificial intelligence, user reporting, and staff known as content moderators to moderate hate speech (Laub, 2019). However, since hate speech posts multiply rapidly and can appear on any platform, eradicating them quickly is difficult and complex. Some mainstream internet platforms have blocking technologies such as sensitive-word filters, but language is diverse and complex: blocking sensitive words that are insulting and discriminatory can only limit some hate speech, as users can still use homophones to express the same ideas. Therefore, although AI and algorithms have been used to identify and filter hate speech, these technologies still have limitations in recognizing complex contexts and ironic language. Lastly, all digital platforms have proven to face unique governance challenges due to their large scale and the multitude and diversity of stakeholders involved, and reliance on internal governance or corporate self-regulation alone is often insufficient to maintain a secure digital environment (Flew, 2021). Because Internet platforms can be used across multiple countries, regulatory measures in a single country are often insufficient to comprehensively address the problem of hate speech; international cooperation, coordination, and joint governance are necessary. However, since legal systems differ from one country to another, the effectiveness of cooperation between countries can be affected by their political, economic, and diplomatic relations. Cultural and linguistic differences also affect countries’ judgments about what constitutes hate speech. In addition, it is impossible to apply one set of regulatory policies to every social media platform, as platforms’ interests are diverse and their regulatory models may differ.
Therefore, it is very challenging to seek cooperation between countries or platforms in managing hate speech. Even if such cooperation could be achieved, the anonymity of the Internet makes it difficult to locate the sources of hate speech and the locations of its users, which makes it challenging to penalize them.
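The weakness of sensitive-word blocking described above can be illustrated with a toy filter. This is a minimal hypothetical sketch, not any real platform's moderation code, and the blocklist words are placeholders:

```python
# A toy sensitive-word filter (hypothetical example) showing why
# exact-match blocking alone cannot catch posts that evade it with
# character substitutions or inserted spaces.

BLOCKLIST = {"idiot", "loser"}  # placeholder blocked insults

def is_blocked(post: str) -> bool:
    """Naive filter: flag a post only if it contains a blocked word verbatim."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(is_blocked("You are an idiot"))   # True  - exact match is caught
print(is_blocked("You are an 1diot"))   # False - substituting "1" for "i" slips through
print(is_blocked("You are an id iot"))  # False - an inserted space slips through
```

The last two calls show the homophone/substitution problem in miniature: the meaning is unchanged for a human reader, but the filter no longer matches, which is why platforms must supplement keyword lists with context-aware models and human review.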

How to reduce hate speech

Although the problem of hate speech is difficult to eradicate immediately, there are some examples of effective hate speech reduction that countries and platforms can learn from. First, Germany enacted the Act to Amend the Network Enforcement Act on June 28, 2021, to combat hate speech and fake news on social networks. It requires covered social media networks to remove “manifestly unlawful” content within 24 hours of receiving a complaint from a user (Gesley, 2021). If the content is not obviously illegal, the social network has seven days to investigate and remove it, and social media networks may be fined up to €50 million ($59.2 million) for non-compliance (Gesley, 2021). The act has been criticized by some for potentially leading to excessive regulation that infringes on freedom of speech, and for being difficult to operationalize in practice. However, it does serve as a deterrent, and social media platforms have already responded quickly, increasing their sense of responsibility. It is therefore worthwhile for other countries to learn from, as long as Germany makes adjustments to address the problems that arise in actual implementation. Second, Douyin began displaying the IP-based location of users in 2022. This move has been criticized by some as compromising user privacy. However, knowing that their location is visible to everyone else makes users feel more at risk of being tracked and identified, which weakens the effect of anonymity and the urge to engage in hate speech; it is a worthy example for other platforms to learn from. Finally, individual countries can raise public awareness of hate speech, encouraging people to report it actively. Improvements in platforms’ moderation techniques are among the measures that could reduce hate speech on the internet.


In conclusion, this blog post has introduced the development of hate speech on the internet, used Douyin as a case study, explained how challenging it is for platforms to combat hate speech, and given some advice that may help platforms combat and regulate it. Because of the difficulty of controlling, let alone eliminating, it, hate speech on the Internet is already a major problem that can have many tragic consequences and threaten the peace and security of societies. It is hoped that in the future, countries and social media platforms will be able to break through current technological limitations and eliminate hate speech on the Internet, building a better and more peaceful society and internet environment.


BBC News. (2022, January 25). Liu Xuezhou: Outrage over death of “twice abandoned” China teen. BBC News. 

Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9).

Gesley, J. (2021, July 6). Germany: Network Enforcement Act Amended to Better Fight Online Hate Speech. Library of Congress, Washington, D.C. 20540 USA.

Flew, T. (2021). Issues of concern. In Regulating platforms (pp. 91–96). Polity.

Laub, Z. (2019, June 7). Hate Speech on Social Media: Global Comparisons. Council on Foreign Relations.

Mondal, M., Silva, L. A., & Benevenuto, F. (2017). A Measurement Study of Hate Speech in Social Media. Proceedings of the 28th ACM Conference on Hypertext and Social Media – HT ’17, 85–94. 

Ng, K. (2023, March 26). Online trolls are taking a toll in China. BBC News. 

Riemer, K., & Peter, S. (2021). Algorithmic audiencing: Why we need to rethink free speech on social media. Journal of Information Technology, 36(4).

Singer, P. W., & Brooking, E. T. (2018). Likewar: The Weaponization of Social Media. In Google Books. Houghton Mifflin Harcourt.

Thomala, L. L. (2023, April 5). China: Online abuse rates on major social media platforms 2022. Statista.

Thomala, L. L. (2024, February 22). China: Douyin MAUs 2022. Statista.

United Nations. (2023). What is hate speech? United Nations.

Yong, C. (2011). Does Freedom of Speech Include Hate Speech? Res Publica17(4), 385–403. 

