Difficult to regulate: Hate speech is gaining ground on social media platforms

Source : Andreas Töpfer

Hate speech is spreading fast on today's social media platforms

In the information age, anyone can produce, review, and disseminate information quickly and conveniently. Content moderation on social platforms, and the broader strategies for supervising and governing them, have become central topics in current debates over network regulation. Since the global outbreak of COVID-19, vicious incidents of racist abuse motivated by race, ethnicity, or religion have increased dramatically around the world (Matamoros-Fernández and Farkas, 2021). Facebook, the world’s largest social network by users, has faced a boycott from non-profit groups over allegations that it has long allowed racial discrimination, violence, and misinformation to flourish on its platform.

Figure 1 Facebook removes record number of hate speech posts (Statista.com, 2023)

In May 2016, the European Commission announced a partnership with Facebook, Twitter, YouTube and Microsoft to control illegal hate speech online, with Internet giants collectively signing a code of conduct promising to “block and remove hate speech within 24 hours of being reported” (Matamoros-Fernández and Farkas, 2021).

In 2018, Germany’s Network Enforcement Act (NetzDG) tightened regulation further, imposing fines of up to 50 million euros for failing to take down content within a specified time and requiring platforms to report on their handling of illegal content every six months. The platforms have also cooperated to a remarkable degree, even at personal risk. For example, Facebook and Twitter took a strict line against the extremist rhetoric of ISIS; in retaliation, ISIS supporters posted a threatening video online showing images of Mark Zuckerberg and Twitter co-founder Jack Dorsey riddled with bullets.

According to the report, Facebook removed about 2.9 million pieces of hate speech between July and September 2018, and last year it banned 259 accounts suspected of publishing and spreading material supporting terrorists in a single sweep (Matamoros-Fernández and Farkas, 2021). Thanks to machine learning, more than 95% of hate content on YouTube can be blocked within 24 hours. With governments taking a tough stance and platforms going all in, why does hate speech-induced violence still persist? Below, we discuss the causes of online hate speech and possible ways to address it.
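The machine-learning detection alluded to in the YouTube figure can be illustrated with a toy classifier. The sketch below is purely hypothetical — a minimal bag-of-words Naive Bayes filter trained on a handful of invented placeholder phrases — and bears no resemblance in scale or sophistication to the systems real platforms deploy.

```python
# Toy sketch of how a machine-learning moderation filter scores text.
# Purely illustrative: real platforms use large neural models, human
# review, and vast training data. All example phrases are invented.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesFilter:
    """Minimal bag-of-words Naive Bayes with add-one smoothing."""

    def __init__(self):
        self.word_counts = {"flag": Counter(), "ok": Counter()}
        self.doc_counts = {"flag": 0, "ok": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text, label):
        # log P(label) + sum over words of log P(word | label),
        # with Laplace (add-one) smoothing over the joint vocabulary
        total_docs = sum(self.doc_counts.values())
        vocab = set(self.word_counts["flag"]) | set(self.word_counts["ok"])
        logp = math.log(self.doc_counts[label] / total_docs)
        denom = sum(self.word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            logp += math.log((self.word_counts[label][word] + 1) / denom)
        return logp

    def predict(self, text):
        return max(("flag", "ok"), key=lambda lbl: self.score(text, lbl))

# Invented toy training data (placeholders, not real hate speech).
nb = NaiveBayesFilter()
nb.train("go back where you came from", "flag")
nb.train("those people are vermin", "flag")
nb.train("great game last night", "ok")
nb.train("lovely weather today", "ok")
print(nb.predict("those people should go back"))  # → flag
```

In practice a classifier this simple would be trivially evaded; platforms combine such statistical signals with deep models, user reports, and human review, which is part of why the 24-hour takedown figures remain imperfect.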

What causes hate speech?

First, there is no authoritative international definition of “hate speech.” Different regions and countries therefore adopt different criteria and standards for identifying it, which leads to inadequate governance measures (Matamoros-Fernández, 2017). For example, in some Internet speech cases under European human rights law, courts have found content hosted by the website operators involved to constitute hate speech, yet declined to punish speech that was merely defamatory or insulting.

Source: Kyiv Ukraine, November 2022

Second, the conflict between free speech and hate speech has not been fully resolved. With the development of the Internet, and of social media platforms in particular, freedom of speech has become part of the social culture, which also allows false information to proliferate. Many people resort to extreme remarks simply to attract attention. Internet legislation alone, however, can hardly solve the problem of hate speech completely (Sinpeng et al., 2021). In the past, several US laws governing the Internet were repeatedly challenged by civil society groups and ultimately ruled unconstitutional by the courts.

Third, the popularity of social networking platforms has blurred the once-clear line between civil society and national government. Their growing user bases and continuous development have turned social platforms into a political force with strong communicative power (Sinpeng et al., 2021). As a result, some politicians exploit these platforms to make false statements, incite racial discrimination and xenophobia, and politicize racial discrimination and hate speech. Since the outbreak of COVID-19, vicious violence against Asian Americans has increased dramatically, fueled by US politicians who repeatedly spread a “viral discrimination theory” in live broadcasts and interviews, deliberately coining regionally discriminatory terms such as “Chinese virus” and “Wuhan virus” for the novel coronavirus. This use of public influence to falsely tie the virus to a geographic region has not only stigmatized certain groups but also encouraged increasingly racist and xenophobic rhetoric, drawing widespread condemnation from countries around the world.

The dilemma of hate speech governance on social network platforms

Social networking platforms have developed rapidly in recent years, which makes hate speech on them ever harder to regulate. The anonymity and diversity of the network allow large volumes of hidden, shape-shifting hate speech to spread rapidly across national borders (Sinpeng et al., 2021). Statistics show that vicious comments on the Internet have increased dramatically since the pandemic, which has in turn driven up violent crime and hate crime. The harm caused by hate speech spreading like a “virus” runs deep (Kovács et al., 2021).

On October 29, 2021, the Othering & Belonging Institute at the University of California, Berkeley released survey data on racial discrimination against Muslims. According to the data, 67.5 percent of respondents had experienced Islamophobia, and 93.7 percent said it had greatly affected their physical and mental health (Mullah and Zainon, 2021). Some racists exploit the invisibility of social networks to find accomplices and plan attacks together. For example, Patrick Crusius, the perpetrator of the racist massacre in the United States in 2019, received considerable support from users of the social networking platform 8chan after the incident (Mossie and Wang, 2020).

Hate speech takes many forms, including oral, written, and symbolic speech. Social networking platforms both disseminate messy, confusing information and let every user produce it, offering diverse modes of expression such as pictures, music, videos, and memes (Castaño-Pulgarín et al., 2021).

The diverse forms that hate speech takes online make it far harder for social network platforms to supervise and govern. The main reasons are as follows. First, symbolic expressions such as emoticons usually do not voice prejudice, discrimination, or hatred directly; instead they mock and joke, making them difficult to characterize and regulate. Second, such racially discriminatory or hateful speech easily inflames public opinion and sharpens conflict, which in turn spreads the speech more widely and intensifies the hidden negative emotions of certain groups (Carlson and Frazer, 2018).
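The first difficulty — that symbolic or disguised expressions rarely match explicit keywords — can be seen in a small sketch. The blocklist term, the leetspeak substitution map, and the example post below are all invented for illustration; real moderation pipelines rely on far richer normalization and contextual models.

```python
# Illustrative sketch of why obfuscated posts evade naive keyword
# filters, and how character normalization narrows (but does not close)
# the gap. The blocklist term "vermin" is a harmless placeholder.
BLOCKLIST = {"vermin"}

# Hypothetical map for common leetspeak/character-substitution tricks.
LEET_MAP = str.maketrans({"3": "e", "1": "i", "0": "o", "4": "a",
                          "@": "a", "$": "s"})

def naive_filter(text):
    """Flag only exact lowercase keyword matches."""
    return any(word in BLOCKLIST for word in text.lower().split())

def normalized_filter(text):
    """Undo simple substitutions and strip non-letter noise first."""
    cleaned = text.lower().translate(LEET_MAP)
    cleaned = "".join(ch for ch in cleaned if ch.isalpha() or ch.isspace())
    return any(word in BLOCKLIST for word in cleaned.split())

post = "they are v3rm1n"
print(naive_filter(post))       # False: the obfuscation slips past
print(normalized_filter(post))  # True: normalization recovers the term
```

Even the normalized filter would miss an emoji-only insult or a slur that is hateful purely in context, which is exactly why such speech is so hard to characterize and regulate.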

Because it is hard to hold individuals accountable for what they post, hate speech on the Internet is characterized by anonymity. Relevant data show that, as of March 2021, global Internet penetration had reached 65.6 percent (Mullah and Zainon, 2021). The decentralization of the Internet lowers the threshold for posting on social network platforms, with two consequences. First, the sheer volume of racially discriminatory or hateful comments means supervision consumes enormous human and material resources and often occurs only after a vicious incident. Second, anonymity makes it much harder to trace individual users: network IDs are difficult to track, so perpetrators often go unpunished. These factors foster a sense of impunity among netizens — a belief that “as long as I am anonymous, I will not be held responsible for what I say.” Lacking the sense of responsibility for speech they would feel offline, users readily join waves of hate speech. For example, content posted by Crusius, the perpetrator of the August 3 massacre in the United States, continued to circulate on the social networking platform after the event, where it was widely discussed and even praised by other users (Castaño-Pulgarín et al., 2021).

Suggested solutions to enhance regulation on social media platforms to reduce hate speech

Source: Combating hate speech and hate crime, European Commission

Given the complexity of regulating speech on social network platforms and the cross-cutting content problems involved, governments should establish a clearer administrative supervision system with feedback and processing mechanisms (Sinpeng et al., 2021). Looking abroad, they can draw on the effective measures Russia and Germany have taken to assign clear institutional responsibility. In Russia, multiple departments jointly govern speech on online social platforms. The media and culture authority, for example, oversees media and communications technology, while the Interior Ministry’s Special Technical Measures Bureau receives reports of harmful online speech.

The Federal Security Service monitors social networking platforms for objectionable content and information bearing on national security, and the Interior Ministry’s Internet Monitoring Center oversees large platforms such as Facebook and Twitter (Mullah and Zainon, 2021). By clearly delineating the regulatory responsibilities of each institution, Russia has made the regulation and governance of speech on social platforms more workable. In Germany, the Bundestag’s “Committee on the Internet and the Digital World” is responsible for providing innovative ideas and suggestions on Internet information regulation, channeling effective opinions into the governance of harmful speech (Mondal et al., 2020). The Federal Criminal Police tracks, reviews, and analyzes the major social network platforms to trace and combat harmful comments, strengthening supervision of harmful speech online. By refining the content of supervision, the relevant administrative organs in Russia and Germany have handled a large number of cases concerning harmful speech on social networks, with remarkable results worth learning from (Mondal et al., 2020).

Given how one-dimensional the current procedures for regulating hate speech on social network platforms are, a multi-level governance system can be built with legislative regulation at its core and policy guidance and technical support as auxiliaries. This requires the organic unity of government regulation, public participation, and industry self-discipline, and exploring effective methods of speech regulation from multiple directions (Kovács et al., 2021). Australia’s multi-party arrangements offer a useful model. The Australian government established the Australian Communications and Media Authority, which is chiefly responsible for regulating and governing online platforms.

The authority also works with monitoring departments and links up with multiple agencies to form an integrated governance system that can crack down precisely on illegal activities from multiple angles. Meanwhile, Internet industry associations put governance suggestions to the government, promoting the rapid development of the Internet supervision system.

Final thoughts

The rapid development of Internet social platforms has made people’s lives more convenient, but it has also brought hidden dangers. Faced with the racial discrimination and hate speech that have surged since the epidemic, the supervision and governance of social network platforms confront a series of difficulties. Countries have developed and refined legislation on social network platforms to suit their different legislative environments and network characteristics.

Yet because online speech is anonymous, diverse, and prone to clustering, the international community still faces severe difficulties and challenges in regulating and governing it. For the same reasons, legislative regulation led by government should be supplemented with industry self-discipline, public participation, and technical support, forming a government-led, multi-party governance system. On the one hand, the latest legislative documents of various countries can be combined with the normative governance of major social networking platforms to form a hierarchical system of norms.

On the other hand, technical safeguards such as Internet big data and AI tracking technology can help the public participate in curbing the spread of illegal speech in a more targeted way, opening up more diversified approaches to regulating and governing online speech and steering the Internet environment toward healthy development from multiple perspectives.

Source: Bernard Marr


Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Carlson, B. & Frazer, R. (2018) Social Media Mob: Being Indigenous Online. Sydney: Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online

Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608.

Kovács, G., Alonso, P., & Saini, R. (2021). Challenges of hate speech detection in social media: Data scarcity, and leveraging external resources. SN Computer Science, 2, 1-15.

Mullah, N. S., & Zainon, W. M. N. W. (2021). Advances in machine learning algorithms for hate speech detection in social media: a review. IEEE Access, 9, 88364-88376.

Matamoros-Fernández, A., & Farkas, J. (2021). Racism, hate speech, and social media: A systematic review and critique. Television & New Media, 22(2), 205-224.

Mondal, M., Silva, L. A., & Benevenuto, F. (2020, July). A measurement study of hate speech in social media. In Proceedings of the 28th ACM conference on hypertext and social media (pp. 85-94).

Mossie, Z., & Wang, J. H. (2020). Vulnerable community identification using hate speech detection on social media. Information Processing & Management, 57(3), 102087.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf
