Hate speech and racial discrimination governance on social networking platforms

This case study revolves around hate speech against the Rohingya people of Myanmar on the Facebook platform.

Online media now possesses an openness, speed, and interactivity that traditional media cannot match, and ongoing technological advancement keeps lowering the barrier to entry. Because of these traits, and the variety of communication topics that may be discussed online, the Internet is currently the most powerful media platform. The anonymity of the Internet has weakened people’s sense of moral and legal restraint, making it easier for them to publish inappropriate comments they would not ordinarily dare to make as an outlet for their discontent and frustrations. As the network has grown, online interaction has shifted from being “dotted” to being “netted.” Individuals who merely voice their grievances and complaints are not in the wrong and are at most condemned on a moral level; but if they spread their hateful views to large groups, try to change the minds of others, and provoke confrontation between groups or even between nations, then this exceeds the scope of freedom of speech and will be regulated and suppressed by many parties.

What is Hate Speech and Racism

Hate speech is often described as typically hostile speech, associated with negative words such as defamation, contempt, humiliation, intimidation, incitement to violence, discrimination, and threats. In addition, hate speech is often understood as expressive behavior directed at real or imagined group characteristics (Richardson-Self, 2018). Beyond such negative terms, some definitions appear neutral but are in fact themselves discriminatory: for example, “race” is taken to mean “non-white”, and “sexual orientation” to mean “non-heterosexual”. In other words, racist or hate speech expresses hostility toward historically and contemporarily oppressed groups, and serves to defame, demean, discriminate against, and insult them. It is also important to note that oppression is usually considered a group condition, so that individuals are oppressed only as members of a group (Frye, 1983). Hate speech currently remains largely a matter of academic debate, lacking a universally accepted and fixed jurisprudential definition. While a unified definition is difficult to develop, most scholars agree that hate speech is most commonly characterized by its potential impact: “it fosters fear, incites violence, promotes division, inculcates prejudice, and promotes discrimination” (Patni & Kaumudi, 2009). According to the website of the International Coalition Against Cyber Hate (INACH, n.d.), hate speech is when someone intentionally or unintentionally makes derogatory remarks about another person or group of people based on their race, religion, gender, sexual orientation, political beliefs, etc.

Characteristics of racism and hate speech on social networking platforms

No.1 Diversity of hate speech and racial discrimination on social networking platforms

Hate speech and other types of racial discrimination can be expressed orally, in writing, or symbolically. Social networking sites offer a variety of ways for users to express themselves, including images, music, video, and emoticons, and they also serve as venues for the distribution of pervasive and misleading information. First, emoji-based symbolic speech can be difficult to describe and regulate because it frequently substitutes ridicule and banter for the outright expression of prejudice, discrimination, or hatred; this has directly increased the complexity of controlling and managing such speech on social networking platforms. Second, such hate speech or racial prejudice frequently becomes a focal point of public opinion and exacerbates confrontation, which in turn spreads the speech widely and deepens the implicit negative social emotions of particular groups.

No.2 Racist or hate speech’s anonymity on social networking platforms

Racial discrimination and hate speech online are typified by anonymity, since the people who post such content are difficult to identify. According to relevant data, the global Internet penetration rate was 65.6% as of 2023 (Oberlo, 2022). The Internet’s decentralised architecture has lowered the threshold for the general public to contribute content on social networking sites, with two consequences. First, the sheer volume of harmful content, such as racial discrimination or hate speech, makes regulation extremely expensive in terms of both labour and resources, and intervention frequently comes only after a violent incident has taken place. Second, the Internet’s anonymous character makes it much harder to track down the specific people who post such speech. Because perpetrators frequently go unpunished thanks to untraceable Internet IDs, users develop the misplaced sense that “as long as they are anonymous, they are not responsible for their speech.” This sense of impunity makes Internet users feel less accountable for their statements than they actually are, which makes it easier for them to take part in an upsurge of hate speech.

No.3 Expanding spread of racist or hate speech on social networking platforms

Distribution and decentralisation are the technical logic embedded at the foundation of the Internet’s development. One-way communication has long been out of fashion, and even two-way communication cannot describe today’s network communication. Along with this, however, comes the rapid spread of harmful information: fission-like communication, with social relationships as its nodes, makes information spread unprecedentedly fast. As a result, certain groups exploit this vast network of contacts to spread hate speech quickly and widely, and to reach ever more people.

Case study: Rohingya in Myanmar suffer hate speech on the Facebook platform and launch a lawsuit, and Facebook’s governance measures against racist or hate speech

63.1% of Myanmar’s total population over the age of 13 are Facebook users; in other words, Facebook is the source of information for the majority of Myanmar residents (Sinpeng et al., 2021). Facebook has been increasingly challenged and boycotted in recent years for not policing racist or hate speech. In one case, Facebook came under fire for its role in inciting violence against the Rohingya in Myanmar (Siegel, 2020). According to BBC News (2021), a British law firm representing some of the refugees sent Facebook a letter claiming that: (1) Facebook’s algorithm “amplifies hate speech against the Rohingya”; (2) the company failed to employ moderators and fact-checkers with knowledge of the political situation in Myanmar; (3) the company failed to remove posts or accounts that incited violence against the Rohingya; and (4) the company failed to take appropriate and timely action despite being informed of the situation. Despite warnings from NGOs and other sources, the company did not act appropriately or quickly to stop the atrocities against the Rohingya. In 2018, Facebook admitted that it had not done enough to stop hate speech and incitement to violence against the Rohingya (Sinpeng et al., 2021). Before that, a reputable investigation commissioned by Facebook had found that the site fostered the spread of human rights abuses. In the claimants’ words, Facebook was willing to risk the lives of Rohingya people in order to gain more market share in a small Southeast Asian nation (BBC News, 2021).

In response to the platform’s regulatory governance of racist or hate speech, Facebook has clarified in its rulebook what content users may post on the platform, with detailed categorization, and allows users to appeal the mistaken removal of content (Ash et al., 2019). The rulebook defines hate speech as a direct verbal attack on people based on protected characteristics such as religious belief, ethnicity, national origin, sexual orientation, race, caste, gender, or disability (Facebook, n.d.). Intense rhetoric or dehumanizing statements, hurtful stereotypes, derogatory remarks, expressions of contempt, disgust, or disdain for others, and calls to exclude or isolate others will be removed. In response to public outcry over terrorist and extremist content on the site, Facebook has created a team dedicated to finding and removing such content, pages, and groups (Siegel, 2020). To regulate content such as racist or hate speech, Facebook employs content reviewers in almost every language in the world and screens content primarily through artificial intelligence, with platform reviewers matching uploaded content against a database of already-identified material to filter out illegal content (Ash et al., 2019).
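The database-matching step described above can be sketched as follows. This is an illustrative toy only, not Facebook’s actual system: real platforms use perceptual hashes (such as PDQ for images) and machine-learning classifiers, while this sketch uses a plain SHA-256 digest and invented function names simply to show the idea of matching uploads against previously removed material.

```python
import hashlib

# Database of fingerprints of content already judged to violate the rules.
known_violations = set()

def fingerprint(content: bytes) -> str:
    """Digest of the content; stands in for a perceptual hash."""
    return hashlib.sha256(content).hexdigest()

def register_violation(content: bytes) -> None:
    """Record removed content so identical re-uploads can be caught."""
    known_violations.add(fingerprint(content))

def screen_upload(content: bytes) -> str:
    """Block exact matches against the database; allow everything else
    (in practice, non-matches would still go to classifiers or humans)."""
    if fingerprint(content) in known_violations:
        return "block"
    return "allow"

register_violation(b"previously removed hateful post")
print(screen_upload(b"previously removed hateful post"))  # block
print(screen_upload(b"benign holiday photo"))             # allow
```

Note that exact hashing only catches verbatim re-uploads; even a one-pixel change defeats it, which is why production systems rely on similarity-tolerant perceptual hashes.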

Facebook’s backend flags millions of pieces of content and users every week. Once flagged by a user, content is queued for review by an auditor. If the reviewer determines that the reported content violates the rules, it is removed from the site, and the users concerned are notified that they have posted content that violates Facebook’s community standards. The rulebook was developed primarily by Facebook employees, but since most of its authors are native English speakers, the actual review process requires moderators fluent in the local language to properly identify inflammatory comments. In practice, however, Facebook outsources much of its post review to outside companies that perform a great deal of simple “yes or no” work, so the review of offending comments still needs improvement.
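The report-and-review workflow above can be summarised in a short sketch. All names here (`flag`, `review_next`, `Report`) are invented for illustration, not Facebook’s API; the sketch only shows flagged content waiting in a queue until a reviewer’s decision either removes it (notifying the poster) or keeps it.

```python
from collections import deque
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Report:
    content_id: str
    reporter: str
    reason: str

# Flagged content waits here until an auditor picks it up.
review_queue = deque()

def flag(content_id: str, reporter: str, reason: str) -> None:
    """A user flags content; it joins the review queue."""
    review_queue.append(Report(content_id, reporter, reason))

def review_next(violates_rules: Callable[[Report], bool]) -> Optional[str]:
    """Pop the oldest report. If the reviewer decides it breaks the
    rules, the content is removed and the poster is notified."""
    if not review_queue:
        return None
    report = review_queue.popleft()
    if violates_rules(report):
        return f"removed {report.content_id}; poster notified of community-standards violation"
    return f"kept {report.content_id}"

flag("post-42", "user-a", "hate speech")
print(review_next(lambda r: r.reason == "hate speech"))
```

The `violates_rules` callback stands in for the human (or outsourced) reviewer’s judgment, which is exactly the step the text identifies as needing local-language fluency.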

There are a number of benefits to the Facebook platform vetting its own content. First, applying traditional media policy and regulation to digital platforms is complex and challenging for a rapidly evolving industry like the Internet (Flew et al., 2019); an informal self-regulation process is more flexible and thus less likely to stifle innovation or excessively restrict consumer choice. Second, because the platform itself bears the cost of regulating, platform self-regulation creates an incentive to lower enforcement and compliance costs; and since the expenses of rulemaking and enforcement are transferred to the platform, self-regulation is also cheaper for the government. Finally, because the platform knows best how to ensure quality and how effective potential measures will be, and because it can access the information it needs at the lowest possible cost, platform self-regulation is well placed to set production quality and standards.

But it would also be imperfect to rely solely on the Facebook platform itself to regulate hate speech and racial discrimination on the social platform. Platform self-regulation has its drawbacks: companies can be assumed never to act against their own interests (Ang, 2001), so self-regulation involves shielding and omission. If regulation is left entirely to the companies themselves, platforms may subordinate regulatory goals to their own business goals.

Government regulatory measures on platforms: the example of German legislation on racial discrimination and hate speech on social networking platforms

Germany’s cautious approach to racist or hate speech is partly a sign of facing up to its history. After World War II, in light of how racist speech in Germany had contributed to genocide and crimes against humanity, German legislation strengthened the regulation of related speech. Germany was the first country to adopt legislation regulating racial discrimination or hate speech on online platforms. Its Network Enforcement Act, in full force since 2018, includes provisions on the responsibilities and obligations of online platforms, the scope of application, rules on fines, responses to appeals against illegal content, transition phases, and domestic authorizations. The law further strengthens the responsibility of platforms by classifying speech content and imposing reporting obligations on platform providers. In classifying speech on online platforms, the law distinguishes three types of content: the first is defamation and incitement to violence, the second is illegal speech, and the third is controversial speech (Gesley, 2017). According to the degree of harm, the law provides different levels of handling standards.

  • The platform party must remove the first type of speech as soon as possible. 
  • The platform party must remove or block the second category of speech within one day. 
  • For the third category of speech, the platform must limit the dissemination of illegal speech to the maximum extent possible after receiving a report. Social networking platforms are required to publish the number of user reports and the results of their processing every six months (Gesley, 2017).
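The three tiers and their handling rules, as summarised above, can be captured in a small data structure. The tier labels and the 24-hour deadline paraphrase this blog’s summary of Gesley (2017), not the statute’s exact wording; the names (`Tier`, `NETZDG_TIERS`) are invented for this hypothetical sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Tier:
    description: str
    action: str
    deadline_hours: Optional[int]  # None = "as soon as possible" / no fixed hour limit

# The three content tiers as described in the list above.
NETZDG_TIERS = {
    1: Tier("defamation and incitement to violence",
            "remove as soon as possible", None),
    2: Tier("illegal speech",
            "remove or block", 24),
    3: Tier("controversial speech",
            "limit dissemination after a report", None),
}

for level, tier in NETZDG_TIERS.items():
    deadline = (f"within {tier.deadline_hours}h"
                if tier.deadline_hours else "no fixed hour limit")
    print(f"Tier {level}: {tier.description} -> {tier.action} ({deadline})")
```

Encoding the tiers as data rather than prose is how a platform’s compliance tooling would typically drive per-report timers and the six-monthly transparency reports the law requires.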


This blog has explored practical experience in the regulation and governance of such speech, taking the Facebook platform as an example, in the context of government legislation, policy norms, and technical means for addressing racial discrimination and hate speech on social networking platforms. The best way to stop hate speech and racial prejudice online is to stand up to it with legislation and reason. To encourage hedging mechanisms among different kinds of speech and advance the self-purification of the Internet environment, it is important to clarify the legal obligations of users and ISPs and to adopt more speech-hedging strategies: responding to speech with speech and confronting hate speech with reason. Platforms and governments each have their own merits in regulating hate speech and racial discrimination, and a better solution is to combine the two to create a positive social environment and a sound speech environment for users.

Reference list:

Richardson-Self, L. (2018). Woman-Hating: On Misogyny, Sexism, and Hate Speech. Hypatia, 33(2), 256-272. doi:10.1111/hypa.12398

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf

Frye, M. (1983). The politics of reality : essays in feminist theory. Crossing Press.

Patni, R., & Kaumudi, K. (2009). Regulation of hate speech. NUJS Law Review, 2(4), 749-778.

INACH. (n.d.). Cyber hate definitions. Retrieved April 8, 2023, from https://www.inach.net/cyber-hate-definitions/

Oberlo. (2022, December 14). 111 ecommerce statistics: The latest data & trends for 2023. Retrieved April 8, 2023, from https://www.oberlo.com/blog/internet-statistics

BBC News. (2021, December 7). Rohingya sue Facebook for $150bn over Myanmar hate speech. Retrieved April 8, 2023, from https://www.bbc.com/news/world-asia-59558090

Siegel, A. (2020). Online Hate Speech. In N. Persily & J. Tucker (Eds.), Social Media and Democracy: The State of the Field, Prospects for Reform (SSRC Anxieties of Democracy, pp. 56-88). Cambridge: Cambridge University Press.

Ash, T. G., Gorwa, R., & Metaxa, D. (2019). Glasnost: Nine ways Facebook could become a better forum for free speech and democracy. IDEAS Working Paper Series from RePEc. https://doi.org/10.31219/osf.io/t7y82

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Ang, P. H. (2001). The Role of Self-Regulation of Privacy and the Internet. Journal of Interactive Advertising, 1(2), 1–9. https://doi.org/10.1080/15252019.2001.10722046

Gesley, J. (2017). Germany: Social Media Platforms to Be Held Accountable for Hosted Content Under “Facebook Act”. Library of Congress. https://www.loc.gov/item/global-legal-monitor/2017-07-11/germany-social-media-platforms-to-be-held-accountable-for-hosted-content-under-facebook-act/
