Paradise Lost: Social Media Becomes a Hotbed of Hate


With the development of information technology, the world has entered the era of “we media”, in which everyone can become a producer and disseminator of news. Social apps and video-sharing platforms have been woven into everyday life, and the unprecedented speed of information transmission has made insulting, defamatory, offensive and privacy-infringing words, images and videos extremely harmful, giving rise to hate speech and other online harms. Regulating such violence concerns everyone’s interests and has become a focus of attention for society as a whole.

Many social media companies, such as Facebook and Twitter, have put policies in place to curb the proliferation of hate speech. Nevertheless, the governance of hate speech remains controversial: many argue that it unduly limits users’ freedom of speech and goes against the original vision of the Internet, while others believe stronger regulation of hate speech is necessary to protect people’s personal safety.

How should we define the boundaries of hate speech? What effective measures can platforms take to deal with it? And, more importantly, how can we strike a balance between curbing hate speech and ensuring free speech?

“Hate speech has been defined as speech that ‘expresses, encourages, stirs up, or incites hatred against a group of individuals distinguished by a particular feature or set of features such as race, ethnicity, gender, religion, nationality, and sexual orientation’.”

Parekh, 2012

Countries and their attitudes

The legal regulation of hate speech dates back to the aftermath of World War II. Article 20, paragraph 2, of the International Covenant on Civil and Political Rights (General Assembly resolution 2200A (XXI), 1966) states: “Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.” To this day, many European and Asian countries have enacted laws regulating hate speech, but attitudes differ from country to country. The United States and Europe are considered to represent two completely different models: broadly speaking, the United States tends to tolerate hate speech, while Europe actively resists it. This is because the core legal principle of the First Amendment is content neutrality, meaning that the government cannot restrict speech based on its content. Although the United States Supreme Court has established several categories of “unprotected speech”, hate speech is not among them, and it remains constitutionally protected under the traditional analytic framework of free speech in the U.S.

People’s differing perceptions of hate speech also have a significant impact. In a survey on “tolerance of hate speech” (Cato at Liberty, 2017), 82% of Americans stated that “people find it difficult to reach a consensus on the definition of hate speech”, leading to divergent expectations for hate speech governance and an inability to agree on standards.

How social media has become a hotbed of hate speech

Since government agencies around the world have taken such a tough stance on this issue, private companies have had to show their “sincerity” in governing hate speech, and it falls to them to take stronger steps to regulate speech on online platforms.

In 2016, the European Union issued a code of conduct to combat illegal online hate speech and signed a commitment with large enterprises such as Facebook, YouTube, Twitter and Microsoft to “block and remove hate speech within 24 hours after receiving reports”. In 2018, regulations were tightened further, with fines of up to 50 million euros for failing to remove content within a specified time, and platforms were required to report the results of their handling of illegal content every six months.

Read more: Facebook, YouTube, Twitter and Microsoft sign EU hate speech code

Unfortunately, enforcement has fallen far short. Research shows that a large amount of hate speech still escapes platform filters, and even the administrators who remove hate speech content are often not formally trained:

… Mardi Gras’s page administrator had not had training in how to moderate a Facebook page or group, and was not familiar with Australia’s hate speech laws.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. 2021

They also follow the principles of the First Amendment and tolerate various forms of discriminatory speech. Twitter’s CEO, Jack Dorsey, has said: “If we (Twitter) succumb and simply react to outside pressure, rather than straightforward principles we enforce (and evolve) impartially regardless of political viewpoints, we become a service that’s constructed by our personal views that can swing in any direction. That’s not us.” (Eleanor, R. 2018) This stance has made race-directed hate speech even more serious, and it has become the source of many malignant terrorist events worldwide.

Since the outbreak of the COVID-19 epidemic, hate speech against Asians has spread continually on social media. Asian Americans and international students locked themselves in their homes for fear of physical and verbal attacks fuelled not by infection but by racial hatred. On 31 March 2020, an account named “antiasiansclubnyc” posted two hateful messages against Asians on Instagram. The first read: “Tomorrow, my guys and I will take the (language) guns and shoot at every Asian we meet in Chinatown, that’s the only way we can destroy the epidemic of coronavirus in NYC!” The account later posted another message justifying its retaliation: “Please do not take this (the shooting) negatively! We are trying to save our planet!” (Nextshark, 2020)

Although Instagram immediately deleted the posts and permanently banned the account that published them, their spread across social media still caused widespread panic among Asian Americans. After receiving the report, the 5th Precinct of the Chinatown police in Manhattan quickly launched an investigation and classified it as a “terrorist threats” case. Its preliminary judgment that the posts “may have been a mean joke for April Fools’ Day” obviously did not placate the public.

Elected officials publicly denounced the hate speech against Asians on social media. Democratic City Council Speaker Corey Johnson called it “absolutely vile”: “Social media attacks against Asian Americans are wrong and shocking, and hate speech has no place anywhere.” City Councillor Yaming Gu said that whether it was an April Fools’ Day prank or a joke, it was deeply disturbing, and that even after the post was taken down, threats, hate and dangerous pranks against the Asian community would not be tolerated.

Celebrity effect and Political games

In fact, it is not just extremists, terrorist groups or radical users who incite hate speech on social platforms; leaders and senior officials of many countries have also made anti-Asian remarks on social media, intentionally or unintentionally encouraging hate crimes, racism and xenophobia. Hate speech has become a new political tool. The “Chinese virus” invoked by former U.S. President Donald Trump and the “Wuhan virus” invoked by Secretary of State Mike Pompeo fuelled the flourishing of hate speech in the United States during the pandemic. The governor of Italy’s Veneto region once told reporters that Italy would handle the epidemic better than China because Italians “have a culture of strong concern for personal hygiene and love to wash hands and bathe, and we have all seen the Chinese eat rats alive.” He later apologised. The Brazilian minister of education likewise mocked the Chinese on Twitter, saying the pandemic was part of the Chinese government’s “plan for world domination.”

Read more: A farewell to @realDonaldTrump, gone after 57,000 tweets

At the same time, some media figures with biased views have played along with politicians. From The Wall Street Journal’s publication of an article titled “China is the real sick man of Asia” (Walter, M., 2020) to multiple Fox News hosts referring to the virus as the “Wuhan virus”, these outlets betrayed journalistic professionalism and became the “mouthpiece” of Washington politicians and accomplices of “hatred”, rapidly promoting racial discrimination against Asians among Americans and exacerbating division and antagonism in U.S. society.

Pain points of governance

  1. Difficulty in unifying the definition of hate speech
  2. The conflict between hate speech and freedom of expression remains unresolved
  3. The game between platform publicity and commerciality

As mentioned earlier, there is currently no unified definition of “hate speech”, and different countries apply different standards when identifying it. Even though Facebook and many other social media companies are American, their social impact is felt all over the world, and coordinating governance standards across jurisdictions is a huge challenge.

It is clear that freedom of expression is not a right that can be exercised arbitrarily. Although freedom of expression is a fundamental human right protected by international law, that does not mean absolute freedom. As with the exercise of other human rights, even where countries advocate different values, the exercise of rights must be kept within reasonable limits.

With the increasing monopolisation of Internet platforms, when the public services a platform provides threaten its commercial interests, the platform tends to shift towards commercially driven action. The profit-driven nature of operators means it is perhaps impossible to truly regulate content from the perspective of social and public interests, which objectively makes platform self-discipline fragile.

Artificial intelligence detection: a long way to go

With the continuous expansion of Internet usage, billions of pieces of content are generated every day, and platforms have to rely on artificial intelligence to help with content moderation. But are these automated content decisions truly effective in governing hate speech?

“Our goal is to spot hate speech, misinformation, and other forms of policy-violating content quickly and accurately, for every form of content, and for every language and community around the world.” said Facebook’s chief technology officer Mike Schroepfer (Sam, S., 2020).

However, Facebook’s artificial intelligence software still finds it difficult to detect certain content that violates its policies, which is not surprising given how hard it is to build AI that can understand the nuances of natural language. For example, the software struggled to recognise the meaning of a combination of image and text, and it wasn’t always accurate in identifying irony and slang. But in many cases, humans can quickly determine whether a piece of content violates Facebook’s policies.
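As a toy illustration of that brittleness, the sketch below (plain Python, with invented placeholder terms, not Facebook’s actual system) shows how a naive keyword filter simultaneously misses disguised slang and wrongly flags posts that merely quote or condemn a word:

```python
# A deliberately naive keyword-based filter. The blocklist terms and
# example posts are hypothetical placeholders, not real moderation data.

BLOCKLIST = {"vermin", "subhuman"}

def naive_flag(text: str) -> bool:
    """Flag a post if it contains any blocklisted word (exact match only)."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# A direct attack using the exact word is caught...
print(naive_flag("Those people are vermin"))           # True
# ...but a trivially disguised spelling slips through...
print(naive_flag("Those people are v3rmin"))           # False
# ...while a post condemning the word is wrongly flagged.
print(naive_flag("Calling anyone 'vermin' is wrong"))  # True
```

The last two cases are exactly the slang and context failures described above: lexical matching has no notion of intent, so it over- and under-blocks at the same time.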

In May 2020, Facebook launched the Hateful Memes Challenge, a competition to detect malicious memes using AI, with a prize of one hundred thousand US dollars to encourage researchers to develop algorithmic systems that can identify them. The challenge leaderboard showed that even the best-performing AI systems were inferior to manual review in recognising memes with offensive meanings. After all, detecting such memes is a multimodal problem that requires joint judgment of the image, the text and the relationship between them, which remains a great challenge for AI classification.
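The multimodal difficulty can be sketched with a toy “late fusion” scorer, in which each modality is scored independently and the results combined. All scores and labels here are invented for illustration; this is not how any production system works:

```python
# Toy illustration of the multimodal problem: scoring image and text
# independently can miss memes whose offensive meaning arises only
# from the combination. All scores below are invented.

def text_score(caption: str) -> float:
    # A caption with an overtly hateful word scores high; otherwise low.
    return 0.9 if "hate" in caption else 0.1

def image_score(image_label: str) -> float:
    # A neutral photo (e.g. a desert landscape) scores low.
    return 0.9 if image_label == "violent_scene" else 0.1

def late_fusion(caption: str, image_label: str) -> float:
    # Combine per-modality scores; here, simply take the maximum.
    return max(text_score(caption), image_score(image_label))

# A benign-sounding caption over a neutral image: each modality looks
# harmless alone, so the fused score stays low, even if caption and
# image together form a hateful meme.
print(late_fusion("look how many people love you", "desert_landscape"))  # 0.1
```

Catching such memes requires a model that reasons about the caption and image jointly rather than fusing independent scores, which is precisely what the challenge asked of entrants.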

What can be done now

Because there is no clear, unified conceptual scope for hate speech, and understanding of it varies among countries around the world, online hate speech governance strategies should be tailored to each country’s cultural background and national conditions. When relevant departments make these rules, they could collaborate with other stakeholders, such as civil society organisations, academics and government agencies; this helps ensure that policies are evidence-based and effective, and that they reflect the needs and perspectives of diverse communities. Introducing governance policies specific to celebrity speech may also be necessary: when celebrities violate the relevant regulations, they would face higher fines or penalties, which may effectively deter public figures from using their influence to incite hatred.

Finding the balance between manual review and artificial intelligence is key to dealing with complex and sensitive content. Because current AI lacks full contextual analysis, moderation should not be handled solely by automated decision systems. The vague nature of human emotions and morality means that relying on AI to decide on such content may lead to misjudgments, leaving marginalised voices unable to express themselves freely.

In addition to content deleting and account banning, platforms can also:

  • Issue warnings classified by age group, so that young users who lack independent judgment are not exposed to harmful content
  • Warn users at publishing time if the algorithm detects that content may involve violations, and alert them that its visibility may be reduced
  • Let users label suspicious content, which is automatically reported to the system once labels reach a certain threshold
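The third measure can be sketched as a simple report-aggregation queue (a hypothetical design, not any platform’s real mechanism): user labels accumulate per post, and a post is escalated for review once they reach a threshold:

```python
from collections import Counter

class ReportQueue:
    """Hypothetical sketch: accumulate user reports per post and
    escalate a post to system review once a threshold is reached."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.reports = Counter()   # post_id -> number of reports
        self.escalated = set()     # posts already sent for review

    def report(self, post_id: str) -> bool:
        """Record one user report; return True if the post is escalated."""
        if post_id in self.escalated:
            return True
        self.reports[post_id] += 1
        if self.reports[post_id] >= self.threshold:
            self.escalated.add(post_id)
            return True
        return False

queue = ReportQueue(threshold=3)
print(queue.report("post_42"))  # False: 1 report
print(queue.report("post_42"))  # False: 2 reports
print(queue.report("post_42"))  # True: threshold reached, escalated
```

The threshold trades off speed against abuse: too low and coordinated mass-reporting can silence legitimate speech, too high and genuinely harmful posts linger before a human ever sees them.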

Overall, hate speech erodes the common values of humanity and seriously poisons international cooperation. Effective regulation of hate speech and online harms is urgently needed. When we see such violent warnings, we should immediately report them to the relevant departments and officials, and firmly oppose this hatred and violence.


Aamer, M. & Jill, C., (2021, January 9). A farewell to @realDonaldTrump, gone after 57,000 tweets, AP NEWS.

Alex, H. (2016, May 31). Facebook, YouTube, Twitter and Microsoft sign EU hate speech code, The Guardian.

Eleanor, R. (2018, August 8). Twitter CEO Jack Dorsey defends failure to ban Alex Jones, The Guardian.

Emily, E. (2017, November 8). 82% Say It’s Hard to Ban Hate Speech Because People Can’t Agree What Speech Is Hateful, Cato at Liberty.

Flew, Terry (2021) Regulating Platforms. Cambridge: Polity, pp. 91-96.

International Covenant on Civil and Political Rights 1966, General Assembly resolution 2200A (XXI), Article 20, paragraph 2.

Los Angeles Times (2020, March 19). Trump calls coronavirus ‘the Chinese virus’, YouTube.

Meta AI (2020, May 12). Hateful Memes Challenge and dataset for research on harmful multimodal content.

Nextshark (2020, April 1). NYPD Investigating Disturbing Mass Shooting Threat in Chinatown on Instagram.

Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz and P. Molnar (eds), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37–56). Cambridge: Cambridge University Press. 

Sam, S. (2020, November 9). Facebook claims A.I. now detects 94.7% of the hate speech that gets removed from its platform, CNBC.

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland.

Walter, M. (2020, February 3). China Is the Real Sick Man of Asia, Wall Street Journal.
