Combating Hate Speech in the Digital Era: A Closer Look at the Challenges and Solutions on Facebook


The rapid development of the internet and social media platforms has significantly amplified the visibility and impact of hate speech, which affects individuals and communities worldwide (Tan, 2022). This phenomenon has drawn significant attention, particularly in the Asia Pacific region, where the diverse socio-political landscape presents unique challenges in identifying and moderating hate speech (Sinpeng et al., 2021). Facebook, one of the biggest social media platforms, has acknowledged the scale and scope of this issue and has been working to improve its machine-learning detection filters, human moderation operations, and content moderation policies (Meta, 2020). Despite these efforts, Facebook still finds it difficult to address hate speech because of dynamic speech environments, language diversity, and varied social and cultural contexts (Murphy, 2020).

[Image: Three months on from the military coup (Dietz & Hammond, 2021)]

In February 2021, Myanmar’s military carried out an unconstitutional coup and overthrew the government led by Aung San Suu Kyi and her National League for Democracy. The coup sparked widespread online and offline protests in Myanmar, and the military has detained approximately 8,000 citizens since seizing power. In this case, social media has been pivotal in inciting violence. Myanmar’s military used Facebook to spread propaganda and hate speech against protesters, human rights defenders, and political leaders who opposed the coup (Morada, 2023). As anti-coup protests escalated into armed conflict, hate speech and calls for violence from protesters, directed at Myanmar’s military and police forces, also rose. Some might justify the use of dehumanizing terms such as “dogs” for soldiers and police because of their violent actions against civilians.

Have you ever wondered why hate speech has such power to harm? Why does it happen? And how could Facebook do better at regulating and moderating hate speech? First, we need to understand what hate speech is.

Understanding Hate Speech: More Than Just Words

“Hate speech transcends mere offensive language: it inflicts harm, discriminates, and marginalizes individuals based on their membership in a particular group”

Hate speech can be defined as a form of expression that necessitates a policy response because of the damage it causes. It is generally recognized that hate speech causes harm in ways comparable to physical injury, which highlights the need for measures to address its impact (Sinpeng et al., 2021). Hate speech therefore transcends mere offensive language: it inflicts harm, discriminates against, and marginalizes individuals based on their membership in a particular group (Hietanen & Eddebo, 2023). This form of speech is not only harmful in the immediate sense but also perpetuates structural inequality and infringes on individuals’ rights, echoing broader societal discrimination.

According to Gelber’s definition, hate speech has four primary characteristics:

(1) The speech must be made publicly;

(2) It must target a member of a systemically marginalized group;

(3) The person making the statement must have the authority to deliver such speech;

(4) The speech must act as a form of subordination, embedding structural inequality into the context where the speech occurs.

The first point indicates that hate speech must occur in a context where it is reasonably foreseeable that others will encounter the speech unintentionally; this distinguishes hate speech from private conversations, which should not be subject to regulation. The second point specifies that only groups facing systemic marginalization are eligible for protection under hate speech laws. The third point clarifies that the authority to engage in hate speech can be formally or informally derived and arises structurally when the speaker’s words generate systemic discrimination against the marginalized group (Gelber, 2021). Lastly, the fourth point describes hate speech as a mechanism that categorizes its targets as inferior, legitimizes discriminatory actions against them, and strips them of their rights and powers.
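Because Gelber's account requires all four criteria to hold jointly, it can be sketched as a simple conjunction of predicates. The following Python sketch is purely illustrative: the `SpeechAct` class and its field names are my own hypothetical formalization, not part of Gelber's text or any real moderation system.

```python
from dataclasses import dataclass

@dataclass
class SpeechAct:
    """Hypothetical representation of an utterance under Gelber's four criteria."""
    is_public: bool                   # (1) made where others foreseeably encounter it
    targets_marginalized_group: bool  # (2) aimed at a systemically marginalized group
    speaker_has_authority: bool       # (3) formal or informal authority to subordinate
    subordinates_target: bool         # (4) embeds structural inequality in context

def meets_gelber_criteria(act: SpeechAct) -> bool:
    """All four criteria must hold jointly for speech to count as hate speech."""
    return (act.is_public
            and act.targets_marginalized_group
            and act.speaker_has_authority
            and act.subordinates_target)

# A private insult fails criterion (1), so it falls outside this account,
# however offensive it may be.
private_insult = SpeechAct(False, True, True, True)
print(meets_gelber_criteria(private_insult))  # False
```

The conjunction makes the definitional point concrete: offensiveness alone is never sufficient, since any single failed criterion excludes the utterance.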

[Image from Sinpeng et al. (2021)]

Meta recently explained its definition of hate speech on Facebook (Meta, 2022), dividing it into three categories: discrimination, inferiorisation, and deprivation. Discrimination is easy to understand. Inferiorisation labels targets as subhuman, for example calling someone a pig or a dog as an insult. Deprivation serves the same objective but is regarded as less harmful (Hietanen & Eddebo, 2023): it denies a group’s rights, as in the claim that “our nation has many pressing issues that are higher priorities than improving women’s rights.”

Therefore, understanding these facets is crucial for developing effective strategies to regulate hate speech and mitigate its impacts.

But how can hate speech be regulated? The first avenue is national legislation.

National legislation

Hate speech laws are a complex set of rules, or proposed rules, that draw from constitutional law as well as penal codes and civil laws (Sinpeng et al., 2021). These laws often struggle to control hate speech across different countries. To combat the increase in hate speech, some countries rely on related laws, such as those covering cybercrimes, telecommunications, and safe spaces. Specific hate speech laws have been proposed in countries like Myanmar, but political instability has prevented these proposals from becoming law. As a result, very few laws directly address hate speech, for several reasons.

The first reason is the risk of hindering freedom of speech. Controlling hate speech through legal means is challenging because of the constitutional right to free speech in each country. The ability of individuals to freely express their opinions, including personal or political beliefs, constrains government efforts to limit hate speech. As a result, critics, political opponents, and those in authority often view direct hate speech laws, or laws that target hate speech indirectly through other legal channels, as threats to freedom of speech (Brown, 2019). Legislation specifically addressing hate speech, or amendments to existing laws to cover it, therefore frequently fails to progress beyond the proposal phase. For example, the Philippines proposed a hate speech law in 2019, but political issues have repeatedly delayed its enactment. Meanwhile, countries like Myanmar and Indonesia are creating and implementing laws to combat what they classify as ‘terrorism’, which includes the alleged use of hate speech by terrorist groups. However, governments often use these laws to strengthen their own power by inaccurately labelling anti-government comments as hate speech. This approach not only restricts freedom of speech but also suppresses public opinion and criticism of the government.

The second reason is that countries must consider the cultural, religious, and ethnic diversity of their regions when addressing hate speech, and may therefore use a variety of legal measures tailored to that diversity. While there are laws and proposals designed to tackle hate speech, they are not always used effectively. These legal tools are meant to prevent hateful acts, both spoken and unspoken, against religious groups, ethnic minorities, diverse cultures and peoples, and individuals targeted on the basis of their gender and/or sexuality (Sinpeng et al., 2021). In 2017, Myanmar introduced a third draft of the “Interfaith Harmonious Coexistence” bill, which included definitions of hate speech. This proposed legislation aimed to criminalize hate speech directed at different religions and faiths. While such interfaith harmony bills are designed to identify and address hate speech, they sometimes blur the line between hate speech and blasphemy, which can lead to confusion in their application.

This lack of clear legal guidance results in insufficient regulation of online hate speech, exacerbating the challenges faced by platforms like Facebook in moderating content effectively.

Platform regulation on Facebook

[Image from Sinpeng et al. (2021)]

The governance of hate speech is also inseparable from the regulation of social media platforms. Facebook uses a detailed system involving its Public and Content Policy, Global Operations, and Engineering and Product teams to moderate hate speech. To identify and moderate hate speech effectively, Facebook increasingly relies on local insights provided by country experts or ‘market specialists’, outsourced content reviewers, and trusted partner organizations (Meta, 2020). These groups are crucial for recognizing how hate speech evolves. They also help explain why certain images or memes, which might seem harmless, are reported more frequently by users once they start being used by hate groups or in contexts involving hate speech.

Despite improvements, the platform’s global standards often fail to capture the varied nature of hate speech. There are several challenges to the effectiveness of this system. First, in authoritarian or highly politically polarized countries, it’s difficult to find appropriate partners or specialists who can reliably manage content without bias. Second, there is a lack of public information about the program. That makes it unclear to civil society organizations (CSOs) whether they qualify to act as trusted partners, even if they are willing and meet the necessary criteria. Third, there is limited transparency regarding which organizations Facebook collaborates with and how everyday users or community groups can contact these entities to escalate serious reports of hate content (Sinpeng et al., 2021).
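The interplay described above, between automated filters, locally sourced term lists, and user reports, can be sketched as a simple routing rule. This is a toy illustration only: the term lists, threshold, and function names are hypothetical assumptions of mine, not Facebook's actual moderation logic.

```python
# Toy sketch of review routing: automated filters catch known slurs, while
# terms flagged by local 'market specialists' and accumulated user reports
# route ambiguous posts to human moderators. All values are hypothetical.

KNOWN_SLURS = {"vermin"}   # assumed global blocklist entry
LOCAL_TERMS = {"dogs"}     # assumed context-dependent term flagged locally
REPORT_THRESHOLD = 3       # assumed escalation threshold for user reports

def route_post(text: str, user_reports: int) -> str:
    """Return a moderation route for a post, given its text and report count."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & KNOWN_SLURS:
        return "auto-remove"    # high-confidence automated detection
    if words & LOCAL_TERMS or user_reports >= REPORT_THRESHOLD:
        return "human-review"   # context-dependent: needs local expertise
    return "allow"

print(route_post("The soldiers are dogs", 0))   # human-review
print(route_post("Peaceful protest today", 0))  # allow
```

Even this crude sketch shows why local input matters: a term like “dogs” is innocuous in most posts and can only be escalated sensibly by reviewers who know the context in which it has become a slur.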

Case Study

[Image: The Role of Social Media in Myanmar’s CDM (Rao & Atmakuri, 2021)]

The Myanmar case underscores the challenges Facebook faces in identifying and moderating hate speech across different languages and cultural contexts. Following the coup, the platform became a venue for the widespread dissemination of hate speech against protesters, ethnic minorities, and even the military. This situation was particularly detrimental for ethnic groups who have long faced discrimination and were further targeted amid the political chaos. The military’s strategic use of Facebook to spread propaganda and hate speech against protesters demonstrates the role of ‘authority’ in the propagation of such speech, perpetuating systemic inequalities and suppressing opposition.

What’s more, Facebook acknowledged its shortcomings in preventing the misuse of its platform to incite violence and spread hate speech in this case. In response, the company undertook several measures. First, it increased the number of Burmese-speaking content moderators who understand the local context. Second, it banned individuals and military leaders who breached its policies against inciting violence.

Although Facebook enacted various strategies to combat hate speech, many observers considered these measures too reactive, and they failed to prevent the escalation of violence (Morada, 2023). This case also highlights the limitations of existing legal frameworks in the region for addressing hate speech effectively. Enacting legislation against hate speech is particularly difficult in Myanmar given the country’s political dynamics. These challenges underline the urgent need for reforms capable of addressing the complexities of hate speech in diverse contexts, and the importance of creating legal frameworks that are resilient and adaptable to various governance challenges.


In conclusion, hate speech on digital platforms like Facebook poses complex challenges that demand multifaceted strategies to manage effectively. Efforts to curb online hate speech require not only technological improvements in detection and moderation but also a robust emphasis on developing legal frameworks. Collaborative initiatives involving governments, civil society, and technology companies are vital to fully understand the dynamics of hate speech and to formulate approaches that balance the protection of free speech with the safety of communities. Combating hate speech on social media sites like Facebook requires continuous adjustment to the dynamic environment of digital communication. By cultivating cooperation and shared insight, we can better manage the harmful impacts of hate speech and foster an online environment that is more inclusive and respectful.


Brown, A. (2019). The Politics of Hate Speech Laws (1st ed.). Routledge.

Dietz, K., & Hammond, C. (2021). Three months on from the military coup, what should the international community do to support the people of Myanmar? [Image]. Global Witness.

Gelber, K. (2021). Differentiating hate speech: a systemic discrimination approach. Critical Review of International Social and Political Philosophy, 24(4), 393–414.

Hietanen, M., & Eddebo, J. (2023). Towards a Definition of Hate Speech—With a Focus on Online Contexts. The Journal of Communication Inquiry, 47(4), 440–458.

Meta. (2020). Sharing Our Actions on Stopping Hate. Meta.

Meta. (2022). Hate speech. Transparency Center.

Morada, N. M. (2023). Hate Speech and Incitement in Myanmar before and after the February 2021 Coup. Global Responsibility to Protect, 15(2–3), 107–134.

Murphy, L. W. (2020). Facebook’s Civil Rights Audit Final Report.

Tan, R. (2022). Social media platforms duty of care – regulating online hate speech. Australasian Parliamentary Review, 37(2), 143–161.

Rao, A., & Atmakuri, A. (2021). The Role of Social Media in Myanmar’s CDM: Strengths, Limitations and Perspectives from India [Image]. National University of Singapore.

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
