Online hate speech: how can platforms govern it better?


The internet has been developing for a long time, and it now connects the world like a spider web; most of the information people receive comes from it. In the Web 2.0 era, more and more people can interact with others online, and social media further expands people's social circles, allowing users to communicate with far more people. Social media has created a kind of public sphere: people can say what they want on these platforms and may encounter strangers who hold views different from their own. As people become more interactive online, the growth of internet platforms also raises new issues, such as privacy protection, internet security and the regulation of hate speech. This blog post examines how internet platforms can effectively control online hate speech, using Facebook's governance of hate speech in Indonesia as a case study.

Figure 1: Online hate speech

(From: YLE News, 2017)

What is online hate speech?

Online hate speech is one of the most common and urgent problems on internet platforms. On average, 42% of people aged 15 to 30 have been exposed to hate speech online, mainly through social media such as Facebook, YouTube and Twitter (Miškolci et al., 2020, p. 128). Hate speech can be defined as speech that expresses, encourages or incites hatred of people on the basis of a particular characteristic or set of characteristics, such as race, nationality, gender, religion, national origin or sexual orientation (Flew, 2021, p. 115). Hate speech can also take many forms. On social media in particular, it can include hateful posts, comments, images or videos, published publicly or posted in closed groups on platforms such as Facebook or Twitter, or private messages sent to the inbox of a targeted individual or group (Miškolci et al., 2020, p. 129). This means that offensive comments can be made not only in public forums but also in private conversations. Online hate speech spreads easily because anyone with access to the internet can create, post and circulate hateful material that affects others within a short period of time.

The effects of online hate speech

For internet platforms, online hate speech has a seriously negative impact on users. Hate speech is contrary to human rights principles, and its spread is growing rapidly in the online world. The Council of Europe has pointed out that although the possibilities for human interaction have exploded with the emergence of the internet, some people remain intolerant of individual differences; if the limits of this intolerance are not addressed, intolerance and hatred will be expressed in people's words and deeds. Hate speech is repugnant because it fosters mistrust and hostility in society (Flew, 2021, p. 115). Hate speech arises because some people have no tolerance for content that is inconsistent with their own views, or for cultural differences, so they publish inflammatory and offensive language.

The recurrent, high-frequency appearance of hate speech in society can lead people to form negative stereotypes about groups and organizations, suggesting that some groups or minorities are truly inferior and should be treated accordingly (Miškolci et al., 2020, p. 129). For the minority groups attacked by hate speech, such speech intensifies their social marginalization, and for some vulnerable groups, this marginalization aggravates the discrimination they already face. Social media users who view hateful speech about a particular minority group may be misled into forming negative stereotypes about that group. Online hate speech can also cause physical and even psychological harm and lead to irreversible consequences.

Figure 2: The effects of online hate speech

(From: Cyber Stories – Online Hate Speech)

In recent years, negative events caused by the spread of online hate speech have repeatedly emerged, including offline violence and even deaths. Take the South Korean artist Sulli as an example. As a celebrity, she often updated her social media accounts to interact with fans, but her comments section was frequently filled with offensive remarks about her, including misogynistic ones. This gender-based hate speech fuelled the bullying of Sulli (Kang et al., 2022, p. 467). After being bullied online for so long, she suffered from severe depression and took her own life in 2019. The tragedy did not end with her death: because she was an influential celebrity, more than 1,000 imitation suicides appeared in South Korea after her death, and the South Korean government at the time asked the news media to reduce their coverage of the details (Kang et al., 2022, p. 466). This was a tragedy caused by online hate speech based on gender and identity. There is also hate speech directed at minority groups and tied to political events; for example, the large amount of anti-Asian hate speech during COVID-19 not only denigrated Asian people online but was accompanied by incidents of abuse against Asians offline. Social media platforms such as Facebook and Twitter play a vital role as communication channels, and given the enormous number of people using them, they should take effective regulatory measures to reduce the occurrence of hate speech.

Case study

In the opinion of many users, Facebook has failed to regulate hate speech in the Asia-Pacific region. For LGBTQ Indonesians, their true identity is rarely acknowledged outside close-knit social circles in a Muslim-majority country, and this cultural context has fostered hate speech against them online. Another reason for the surge in anti-LGBTQ hate speech in Indonesia is that conservative Islamic organizations have multiplied in power and influence during Jokowi's presidency. Public comments and threats by anti-LGBTQ government officials, together with the characterization of LGBTQ people as radicals by national bodies, Islamists and mainstream religious organizations, have contributed to Indonesia's shift towards a more conservatively Islamic society. Anti-LGBTQ rhetoric by leaders and national organizations has greatly increased discrimination against LGBTQ people (Sinpeng et al., 2021, p. 22).

Hate speech against the LGBTQ community in Indonesia falls into three categories (Sinpeng et al., 2021, p. 22):

• Religion-based comments, such as "gay men go to hell", including descriptions of how they should go to hell and claims that they are being punished for going against God's will.

• Direct threats of physical violence against LGBTQ+ people, such as being stoned or beheaded.

• Claims about the nature of homosexuality that characterize it as deviant behaviour, echoing the long-standing narrative of Indonesia's political and religious elite.

Hate speech on Facebook is not confined to private accounts, some of whose owners publicly identify themselves. The Facebook pages of groups that protect Indonesia's LGBTQ community, such as Suara Kita and Yayasan GAYa NUSANTARA, are also flooded with hate speech. This leads members of the community to believe that when they are the target of hate speech, the platform does not provide adequate protection and support to help reduce it (Blake, 2021).

The approaches Facebook has taken to reduce and police hate speech in Indonesia rely mainly on AI systems that capture and intercept keywords, supplemented by human moderators. Facebook's AI system automatically detects posts that may contain derogatory or negative words and removes them. According to a report released by Facebook, automated blocking helps remove nearly 90 per cent of hate speech content, often before it is even reported (Facebook, 2020). Yet even though the AI system helps Facebook's moderators detect possible hate speech more easily and quickly, hate speech directed at the LGBTQ community in the region is growing faster and changing in content. Facebook's algorithms typically rely on machine learning techniques and are customized for each type of content, such as images, video, audio and text. However, they cannot keep up with the pace of change, so some abusive comments and posts escape the AI's scrutiny and appear on the platform unhindered. In my opinion, a second reason for the failure of AI governance is that hate speech on Facebook often consists not of plain text but of memes or videos that combine text and images. Because the harmful content is delivered visually rather than textually, Facebook's AI system clearly cannot immediately detect and flag such posts for deletion (Sullivan, 2020).

Figure 3: Hate speech shown as a meme
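
Facebook's actual classifiers are proprietary and far more sophisticated, but the keyword-interception idea described above can be sketched in a few lines of Python. Everything here is hypothetical and purely illustrative: the `normalize` and `flag_post` helpers and the placeholder blocklist terms are invented for this sketch, not Facebook's method.

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Lowercase, strip accents, and collapse runs of three or more repeated
    letters, so simple disguises like 'slúúúúr' reduce to the plain term."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"(.)\1{2,}", r"\1", text.lower())

def flag_post(text: str, blocklist: set) -> bool:
    """Return True if any blocklisted term appears as a whole word."""
    words = re.findall(r"[a-z']+", normalize(text))
    return any(word in blocklist for word in words)

# Placeholder terms -- a real blocklist would be far larger and would be
# maintained per language and per region.
BLOCKLIST = {"exampleslur", "otherslur"}

print(flag_post("This post contains EXAMPLESLUUUUR", BLOCKLIST))  # True
print(flag_post("A perfectly benign post", BLOCKLIST))            # False
```

Exact-match filters like this are precisely what novel spellings, coded language and image-based memes evade, which is the failure mode the paragraph above describes.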


For content moderators, there is no systematic training on identifying hate speech. Facebook's training materials for local content moderators are written in English rather than the local language; the documents are full of technical terms and lack any logical order, so the training has no substantive effect (Marantz, 2020). This means local content moderators still work with a vague definition of hate speech and an unclear governance approach, which is why Facebook's content moderators in Indonesia have not intervened more actively against hate speech on the platform.

How can internet platforms govern online hate speech more effectively?

The first approach is to strengthen the technological tools for policing online hate speech. Facebook needs to consult regularly and widely with the groups most affected by hate speech, such as LGBTQ and ethnic-minority users, collecting examples of the hate speech they encounter and feeding that information back into the AI system so it can respond to new variants. Platforms using AI to control hate speech also need to balance users' freedom of speech against that control, which raises the question of transparency in AI decision-making. From the user's perspective, there is little transparency about which posts or comments the AI automatically decides to delete. Platforms can reduce this opacity by allowing ordinary users to understand the governance patterns embedded in the AI systems (Gorwa et al., 2020, p. 12). Once users understand the governance model, they will know what kinds of content and speech may be blocked or deleted, and they can regulate what they upload and publish accordingly.
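The consultation-and-retraining loop suggested above might, in a heavily simplified and hypothetical form, look like the following sketch. The `Report` and `FeedbackPipeline` names are invented for illustration; a real pipeline would feed the confirmed examples into an actual model-retraining step.

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    """A hate-speech report submitted by an affected user or community group."""
    text: str
    source: str  # e.g. a community liaison or an ordinary user

@dataclass
class FeedbackPipeline:
    """Collect community reports, apply a human moderation decision, and
    accumulate confirmed examples as labelled data for the next model update."""
    pending: list = field(default_factory=list)
    training_data: list = field(default_factory=list)

    def submit(self, report: Report) -> None:
        self.pending.append(report)

    def review(self, is_hate_speech) -> int:
        """Run a moderator's decision function over pending reports; confirmed
        items become (text, label=1) pairs. Returns how many were confirmed."""
        confirmed = 0
        for report in self.pending:
            if is_hate_speech(report):
                self.training_data.append((report.text, 1))
                confirmed += 1
        self.pending.clear()
        return confirmed

pipeline = FeedbackPipeline()
pipeline.submit(Report("a newly coined slur variant", "community liaison"))
pipeline.submit(Report("an ordinary holiday photo caption", "user"))
print(pipeline.review(lambda r: "slur" in r.text))  # 1 confirmed example
print(pipeline.training_data)
```

The design point is the loop itself: affected communities supply fresh examples, humans confirm them, and the labelled data lets the automated system catch variants it previously missed.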

Secondly, increase training within the community, both for content moderators and for users. The case above shows that Facebook's training of content moderators in Indonesia is not enough to help them identify what counts as hate speech; moderators often have to rely on their own judgment and moral intuition to decide whether content posted in the community is hate speech (Marantz, 2020). Content moderators in non-English-speaking regions need more standardized and detailed training on the definition of hate speech and on how to intervene properly. Training for users can take the form of guiding them to abide by the norms and rules of the social network and not to post hate speech. The platform can raise users' legal awareness and sense of responsibility by strengthening user education and providing relevant information and educational materials. Regulating users' behaviour in this way can further reduce the occurrence of hate speech.

The last way to help internet platforms manage hate speech effectively is to cooperate with other social organizations or institutions. Cooperating with trusted organizations helps a platform better understand the types and content of hate speech in different regions and respond to it in line with local languages and cultures. Take Facebook's management of hate speech against the LGBTQ community in Myanmar as an example: Facebook works with the United Nations and other organizations to specifically address hate speech in the country. This cooperation helped broaden the definition of hate speech, had a positive impact on the handling of harmful content (Blake, 2021), and effectively reduced the amount of anti-LGBTQ hate speech posted on Facebook in Myanmar.


References

Blake, E. (2021, July 5). Facebook still allowing hate speech on public pages. The University of Sydney.

Flew, T. (2021). Regulating platforms. John Wiley & Sons.

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945.

Kang, N. G., Kuo, T., & Grossklags, J. (2022, May). Closing Pandora’s Box on Naver: Toward Ending Cyber Harassment. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 16, pp. 465-476).

Marantz, A. (2020, October 12). Why Facebook Can’t Fix Itself. The New Yorker.

Miškolci, J., Kováčová, L., & Rigová, E. (2020). Countering hate speech on Facebook: The case of the Roma minority in Slovakia. Social Science Computer Review, 38(2), 128-146.

Facebook. (2020, July 2). Sharing our actions on stopping hate. Facebook for Business.

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award.

Sullivan, M. (2020, August 14). Facebook’s AI for detecting hate speech is facing its biggest challenge yet. Fast Company.
