
The world torn apart by hate speech
As hate speech surges worldwide, the content-filtering and speech-regulation strategies of internet companies have come under the spotlight. By surveying how the major Western social media platforms define, regulate and algorithmically police hate speech, this paper shows that internet companies, rather than states and governments, have become the principal regulators of hate speech, and sketches a future in which they are drawn further into the shifting dynamics of global politics, deepening friction with states and other political bodies. In this context, it is worth asking whether the future of algorithmic identification is at stake in the critical global political imaginary.
Understanding ‘Hate Speech and Online Harms’: from ancient concept to global problem
The history of hate speech
The concept of hate speech can be traced back to the classical Greek period, when hateful speech was already seen as an irrational act of violence. The Old Testament likewise warns that “hatred stirs up disputes” and that “wise men should be careful what they say”. From the mid-twentieth century onwards, as racial hatred and discrimination on the grounds of gender and immigration came to the fore in the aftermath of World War II, Western societies grew increasingly concerned with hate speech, and the state became the main actor in defining and regulating it. As Judith Butler puts it, “The state acts as an active actor in producing and delimiting the range of publicly acceptable speech, distinguishing between what is speakable and what is un-speakable” (Butler, 1997).
Online Hate: A Growing Global Reality
With the development of internet technology and the rise of social media, and with the definition and regulation of hate speech still ambiguous, the problem of hate speech and online harm has grown increasingly prominent, fast becoming one of the defining realities of the current global moment.
First, online hate speech hides within a vast body of anonymous, geographically unbounded content that is multilingual and multimodal, combining text with emojis, images, audio and video. Such content is increasingly oblique, abstract and symbolic, making it extremely difficult for automated systems, and even advanced AI, to identify quickly.
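To illustrate why symbolic and obfuscated posts evade simple detection, the following is a minimal sketch, not any platform’s actual system: a naive keyword filter and a lightly normalized variant, with a placeholder blocklist term and invented function names. Leetspeak slips past the naive version, and emoji-style lettering is likely to slip past both.

```python
import re
import unicodedata

# Hypothetical placeholder term; real systems maintain far larger, multilingual lists.
BLOCKLIST = {"hate"}

def naive_filter(post: str) -> bool:
    """Return True only if a blocklisted term appears verbatim."""
    tokens = re.findall(r"[a-z]+", post.lower())
    return any(tok in BLOCKLIST for tok in tokens)

def normalized_filter(post: str) -> bool:
    """Slightly more robust: strip accents and undo common character substitutions."""
    text = unicodedata.normalize("NFKD", post).encode("ascii", "ignore").decode()
    text = text.lower().translate(str.maketrans("4301!$", "aeoils"))
    tokens = re.findall(r"[a-z]+", text)
    return any(tok in BLOCKLIST for tok in tokens)

posts = [
    "I hate group X",    # caught by both filters
    "I h4te group X",    # leetspeak evades the naive filter
    "I 🅷🅰🆃🅴 group X",   # emoji-style letters are likely to evade both filters
]
for p in posts:
    print(p, naive_filter(p), normalized_filter(p))
```

The point of the sketch is only that each layer of obfuscation forces another layer of normalization, and images, audio and video offer no such shortcut at all.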

Second, the transnational nature of the internet makes it easier for individuals and organizations to evade government regulation in their home countries, and as a result hate-speech websites proliferated around the world within just a few years. According to the Simon Wiesenthal Center, as early as 2000 there were more than 2,300 ‘problematic’ hate sites worldwide, including more than 500 extremist sites created by Europeans but hosted on US servers to evade European anti-hate laws (Perine, 2000). Within little more than a decade, the number of websites, forums and social media accounts promoting hatred and terrorism had skyrocketed to 30,000 (Simon Wiesenthal Center, 2014).

Finally, the global political instability of recent years, the rise of populism, waves of social movements and the outbreak of the COVID-19 pandemic have driven a new surge in online hatred. According to Facebook, online hatred has trended sharply upwards since the beginning of 2020, especially content tied to pandemic tensions, anti-Asian rhetoric, gender and racial discrimination, incitement to civil unrest and violence, the Black Lives Matter protests, terrorism and extremist action, and even the storming of the US Capitol that followed the 2020 presidential election and Donald Trump’s departure from office. Facebook’s figures for the fourth quarter of 2020 show that, on average, seven to eight of every 10,000 content views on the platform contained hate speech, a prevalence of roughly 0.07 to 0.08 per cent (Facebook, 2021).
Technology companies as actors: Practical strategies of the world’s leading social media
Facebook: a definition under constant revision and expansion
Facebook, the world’s largest social media platform, has long tried to define hate speech in its corporate rules, but in practice its concept of hate speech has been repeatedly amended and extended, with the scope of each revision expanding in response to new flashpoints of controversy and public opinion.
For example, in 2019 Facebook’s rules stated: “We define hate speech as a direct attack on a person based on a protected characteristic: race, ethnicity, nationality, religious belief, sexual orientation, caste, gender, gender identity, and serious illness or disability. We also provide some protection for immigration status.” This definition lists nine protected characteristics, and its breadth already far exceeds the definitions adopted by most countries and political organizations. Even so, it was accused of being one-sided, biased and neglectful of vulnerable groups, particularly because it did not cover hate speech against immigrants, sparking an outcry in European countries. Under public pressure, Facebook amended its definition in 2020 to include ‘certain’ protections for migrant identities and, in January 2021, extensively revised its rules to include protections for ‘refugees, migrants, and asylum seekers’.
However, these additions and extensions still do not cover every controversy that arises in practice, and to avoid further public-opinion crises Facebook has had to keep updating its rules in response to external change. Plans never keep pace with change, and Facebook faces an ever faster expansion of hate speech: detected removals rose many times year on year in 2020, with 22.5 million pieces of hate speech removed from Facebook and 3.3 million from Instagram in the second quarter alone. Hate speech has become one of the most significant issues Facebook must address today.

Twitter: avoiding a definition of ‘hate speech’ and creating a ‘hateful conduct’ concept
Unlike Facebook, Twitter does not define ‘hate speech’; instead it has created the concept of ‘hateful conduct’ as part of its user rules, stating: “Users are not allowed to target, directly attack or violently threaten others on the basis of race, ethnicity, nationality, caste, sexual orientation, gender, gender identity, religious beliefs, age, disability or serious illness” (Twitter, 2021).
By sidestepping the concept of hate speech, Twitter argues that what it blocks is not “speech” but “behaviour”. Offensive content may therefore still be posted; hateful speech only constitutes ‘hateful conduct’ when it is directed at a specific user (Jeong, 2016).
Twitter’s avoidance of a hate-speech definition has drawn criticism of its own. Scholars argue that its guidelines are at once vague and overly broad, and users have long complained that the rules are enforced inconsistently and ineffectively and fail to protect targeted groups from harassment.
Twitter’s regulation of user speech relies on two main channels: user-initiated reporting and mass flagging. The system is primarily manual, supplemented by an automated AI component that alerts users to potentially harmful content but leaves them to decide whether to keep viewing it; users can also flag content as harmful themselves. Critics have noted, however, that once content is reported Twitter often does little more than block it. This apparent neutrality has accelerated the loss of Twitter users, especially the celebrities and opinion leaders most exposed to verbal abuse, and has directly contributed to the shrinking of Twitter’s content ecosystem. In a 2015 internal forum on social concerns, Dick Costolo, then CEO of Twitter, admitted that the company had lost many users because there were too many ‘trolls’ on the platform (Tiku & Newton, 2015).
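To make the report-plus-flag workflow described above concrete, here is a minimal, hypothetical sketch of such a moderation queue. The class, thresholds and function names are invented for illustration and do not describe Twitter’s actual pipeline; the key design point it mirrors is that an automated score only adds a warning label, while escalation to human review is driven by user reports.

```python
from dataclasses import dataclass

WARN_THRESHOLD = 0.7    # assumed classifier score above which a sensitivity warning is shown
REVIEW_THRESHOLD = 3    # assumed number of user reports that triggers manual review

@dataclass
class Post:
    text: str
    model_score: float = 0.0       # output of some classifier, assumed in [0, 1]
    reports: int = 0
    warned: bool = False
    queued_for_review: bool = False

def apply_automatic_flag(post: Post) -> None:
    """Label, but do not remove: viewers can still click through the warning."""
    post.warned = post.model_score >= WARN_THRESHOLD

def register_report(post: Post) -> None:
    """User-initiated reporting; enough reports escalate the post to manual review."""
    post.reports += 1
    post.queued_for_review = post.reports >= REVIEW_THRESHOLD

p = Post(text="example post", model_score=0.82)
apply_automatic_flag(p)
for _ in range(3):
    register_report(p)
print(p.warned, p.queued_for_review)  # True True
```

A queue of this shape keeps the platform formally ‘neutral’ about content until users complain, which is precisely the stance critics describe as too passive.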
YouTube and TikTok: the ‘irresponsible’ and ‘guilty’ entities
An internet company’s attitude and policies towards online hatred bear directly on its corporate image. Even companies such as Facebook and Twitter, which invest heavily in regulating speech, face a great deal of blame, criticism and external pressure; companies that respond slowly or manage the problem less proactively are cast as ‘irresponsible’ and ‘guilty’ entities.

YouTube, for example, has been heavily criticized for its lack of initiative in regulating expression, accused of having “transformed itself from a mere video-sharing site and news and entertainment site into a platform for the dissemination of extreme views and misinformation”. Under public pressure, YouTube made more than 30 updates to its rules in 2018 alone, increased its daily hate speech removals 46-fold within a short period and shut down a large number of channels involved in hate speech (Weight, 2020); by 2019 its hate speech policy listed 11 protected characteristics, including “veteran status”. Because video and audio are far harder to identify and filter than text, however, YouTube continues to face criticism for failing to detect hate content in a timely manner. Researchers have also argued that a high volume of removals does not by itself demonstrate the extent of YouTube’s efforts, “as YouTube’s hate speech problem is not necessarily related to volume” (Martineau, 2019). One study contends that YouTube has in many ways incentivized the spread of extremist content by “monetizing the influence of all people, no matter how harmful their belief system” (Lewis, 2018). Moreover, the platform still cannot rely on intelligent algorithms as its primary identification tool; it depends more on users to report dangerous or abusive content, with all such reports then subjected to manual qualitative review.
TikTok has likewise drawn criticism from around the world for lacking clear guidelines on hate speech and a coherent approach to content regulation. On the one hand, its self-presentation as a purely entertainment-oriented, video-sharing platform business has not been accepted in US society; on the other hand, because TikTok’s users are predominantly teenagers, academics and industry observers have grown increasingly concerned about the potential harm of hate speech to children (Simpson, 2019). Studies have shown that various terrorist and extremist groups use TikTok to spread hate speech and that minors are highly vulnerable to it (Weimann & Masri, 2020). Owing to its ineffective response to hate speech and its failure to provide adequate mechanisms to protect minors, TikTok has come to be imagined as a place where such evils are produced and take root: the platform has repeatedly been described as having a “Nazi problem” (Cox, 2018) and a “white supremacist problem” (Fung, 2020), and as “a new minefield of hate speech” (Christopher, 2019). Amid the criticism, TikTok has accelerated its regulatory efforts, applying algorithms to flag and remove harmful content while also employing over 10,000 people worldwide to work on trust and safety, a large proportion of whom are tasked with reviewing content uploaded to the platform (Shead, 2020).
Conclusions and reflections: Technology companies at a ‘crossroads’
It is increasingly apparent that the world is being “torn apart”. Amid the proliferation of hate speech, internet technology companies have replaced the state as the main regulators of hate speech, and in doing so have inevitably come into friction with states, advertisers, the public and many other actors.
They stand at a ‘crossroads’, caught in a kind of ‘prisoner’s dilemma’: besides the struggle of crafting an overall hate speech strategy, internet companies face the daily chore of identifying, filtering and cleansing hate speech. Artificial intelligence based on automated language recognition has long been deployed to regulate hate speech, and its development costs internet companies a fortune, yet they must still answer doubts about whether computers can recognize human hatred and controversies over the bias embedded in algorithmic code. These “imperfect”, delicate and “fragile” algorithms, prone to computational error, frequently criticized for their own “bias” and shrouded in the original sin of the “black box”, are seen as a technological force that tolerates, parasitizes, and even produces and promotes hatred in human society, making the world ever more “torn” and antagonistic. Even so, for reasons of cost and efficiency, the development and refinement of algorithms will remain the primary means by which technology companies regulate hate speech. If technology companies are edging closer to the political arena through their regulation of hate speech, then the future of algorithmic identification is at stake in the critical global political imagination. Tech companies are becoming a kind of informally constituted body politic, taking control of the global flow of hate within a vast ocean of content. Will they open up a new democracy, or a new dictatorship, in the online world? That is a question for all of us to ponder.

References
Butler, J. (1997). Sovereign performatives in the contemporary scene of utterance. Critical Inquiry, 23(2), 350-377.
Christopher, N. (2019, August 12). TikTok is fuelling India’s deadly hate speech epidemic. Wired UK. https://www.wired.co.uk/article/tiktok-india-hate-speech-caste
Cohen-Almagor, R. (2017). J. S. Mill’s boundaries of freedom of expression: A critique. Philosophy, 92(4), 565-596.
Cox, J. (2018, December 19). TikTok has a Nazi problem. Vice. https://www.vice.com/en/article/yw74gy/tiktok-neo-nazis-white-supremacy
Facebook. (2021, February 11). Community standards enforcement report. https://transparency.facebook.com/community-standards-enforcement#hate-speech
Fung, B. (2020, August 14). Even TikTok has a white supremacy problem. CNN. https://edition.cnn.com/2020/08/14/tech/tiktok-white-supremacists/index.html
Jeong, S. (2016, January 14). The history of Twitter’s rules. Vice. https://www.vice.com/en/article/z43xw3/the-history-of-twitters-rules
Lewis, R. (2018, September 18). Alternative influence: Broadcasting the reactionary right on YouTube. Data & Society. https://datasociety.net/library/alternative-influence/
Martineau, P. (2019, September 4). YouTube removes more videos but still misses a lot of hate. Wired. https://www.wired.com/story/youtube-removes-videos-misses-hate/
Perine, K. (2000, July 25). The trouble with regulating hatred online. CNN. http://www.cnn.com/2000/TECH/computing/07/25/regulating.hatred.idg/index.html
Shead, S. (2020, October 21). TikTok plans to do more to tackle hateful content after reports say it has a “Nazi problem”. CNBC. https://www.cnbc.com/2020/10/21/tiktok-says-it-plans-to-do-more-to-tackle-hateful-content.html
Simon Wiesenthal Center. (2014, May 1). District Attorney Vance and Rabbi Abraham Cooper announce the Simon Wiesenthal Center’s report on digital terrorism and hate. https://www.wiesenthal.com/about/news/district-attorney-vance-and.html
Simpson, R. M. (2019). “Won’t somebody please think of the children?” Hate speech, harm, and childhood. Law and Philosophy, 38(1), 79-108.
Tiku, N., & Newton, C. (2015, February 4). Twitter CEO: “We suck at dealing with abuse”. The Verge. https://www.theverge.com/2015/2/4/7982099/twitter-ceo-sent-memo-taking-personal-responsibility-for-the
Twitter. (2021, March 17). The Twitter rules. https://help.twitter.com/en/rules-and-policies/twitter-rules
Weight, J. (2020, December 3). Updates on our efforts to make YouTube a more inclusive platform. YouTube Official Blog. https://blog.youtube/news-and-events/make-youtube-more-inclusive-platform/
Weimann, G., & Masri, N. (2020). Research note: Spreading hate on TikTok. Studies in Conflict & Terrorism, 1-14.