“Giving a baptismal name is worse than eating pork.” This statement appears at the beginning of a video satirizing Christianity, posted on TikTok by user @logikapolitik during the Indonesian presidential election as a means of attacking one of the presidential candidates. The video received 65.8k likes, highlighting the prevalence of religious hate speech on social media during elections. As Indonesian presidential candidate Anies Baswedan commented, “Religious issues come up when there is a Muslim candidate and a Christian candidate” (Handley, 2023).
This incident underscores the grave issue of religious and political hate speech proliferating on social media. Religiously motivated hate speech can deepen societal division, hostility, and mistrust, and even fuel violence. This post discusses how social media generates hate speech during elections, as well as the steps taken by platforms and governments to combat it.
How Social Media Becomes a Breeding Ground for Hate Speech During Political Elections
Social media has become an important arena for propaganda and the contest over public opinion during political elections. In multi-ethnic countries especially, winning the support of particular ethnic groups is a central focus of political competition. Candidates and their supporters use various means, including internet technologies and hashtags, to amplify their propaganda and influence. These methods, however, are prone to abuse, resulting in malicious attacks, rumor-mongering, and defamation, and ultimately becoming a major driver of hate speech and ethnic conflict.
“Cyberarmies” are large numbers of fake social media accounts registered with digital tools for the purpose of rapidly disseminating partisan, discriminatory, or even false information (Jalli, 2023b). Cyberarmies are often hired by political forces during elections to manipulate voters’ views and behavior by creating fake news and spreading hate speech. A typical example comes from the 2022 Malaysian general election, when riot videos suddenly spread widely on TikTok to incite anti-Chinese sentiment (Jalli, 2023a). These tactics are not only covert but also fast and far-reaching, enabling political forces to influence the views and behavior of large numbers of people in a short amount of time. Social media’s hashtag feature is another common tool. Hashtags let users link a post to any trending event simply by placing the “#” symbol before a relevant keyword or phrase, sorting people into groups with shared interests and values. Because users then mostly see posts that align with their own beliefs while opposing political opinions are filtered out, hashtags can create information echo chambers that harden cognitive biases and misunderstandings, intensify hostility and animosity between groups, and serve as a breeding ground for hate speech. During the 2022 Malaysian election, for instance, the riot videos circulated under the hashtag #antiChina (Jalli, 2023a). Similarly, in Ethiopia, a country known for its ethnic diversity, different political factions use social media to manipulate discussions about ethnic politics.
This phenomenon has reached a boiling point, with government supporters on one side and Tigray activists and their supporters on the other spreading hashtags like #TigrayGenocide and #TPLFisTheCause on Twitter and Facebook to incite extreme ethnic hatred (Wilmot et al., 2021). Social media algorithms exacerbate the echo chamber effect: platforms like TikTok recommend content based on users’ interests, so users see only posts that align with their beliefs (Carson, 2021). As a result, users become more biased, accepting and reinforcing only their own views without considering other perspectives.
In short, the anonymity, secrecy, interactivity, immediacy, shareability, and clustering features of the internet make it easy to spread rumors and extreme speech. When ethnic identity, emotion, and belief are at stake, ethnic and national consciousness is easily aroused and intensified. Left unchecked, this can escalate into heated confrontation, serious ethnic division, and even offline violence. A human rights impact assessment commissioned by Facebook confirmed a connection between posts on Facebook and offline violence in Myanmar (BSR, 2018). Social media has thus become a breeding ground for hate speech, and effective regulation and countermeasures must be taken to reduce this harm.
How to Tackle Hate Speech? Efforts and Challenges of Social Media Platforms
Major digital platforms have acted to monitor and control hate speech in response to its recent rise online. Facebook, the world’s largest social networking site, has been vigorously policing hate speech on its platform: the share of hate speech proactively detected by its AI system rose from 52% in 2018 to 82% in 2021 (Hovsepyan, 2021; Zuckerberg, 2018). Twitter’s Hateful Conduct Policy, YouTube’s Community Guidelines, and TikTok’s Community Guidelines all provide clear definitions of hate speech and use algorithms to flag harmful content. Because AI algorithms remain imperfect, these platforms have also hired large numbers of content moderators to review flagged content manually. When sensitive content or accounts are flagged, blocking and removal are the most common responses: TikTok, for example, blocked thousands of riot videos disseminated during the Malaysian election (Jalli, 2023a), and Twitter suspended accounts engaged in mass tweeting and copy-pasting during the hostile confrontation between two opposing factions in Ethiopia (Wilmot et al., 2021).
However, deletion and blocking have limited effect on hate speech related to ethnic issues.
While Facebook’s AI system can detect nearly 99% of terrorist propaganda and nudity, its hate speech detection rate was only 82% as of 2021 (Hovsepyan, 2021), and @logikapolitik’s video mocking Christianity remained on TikTok for over three months (Jalli, 2023b). Platforms cannot fully control hate speech related to ethnic issues because of the complexity of language. On the one hand, automated filters can block only the prohibited words already in their databases (Giansiracusa, 2021), while users can communicate extremist messages through means the system struggles to identify, such as memes or encrypted text (Johnson et al., 2019). On the other hand, people’s standards for distinguishing hate speech, offensive speech, and legitimate political views differ: in a 2017 survey on tolerance of hate speech, 82% of respondents said it is difficult for people to agree on what constitutes hate speech, a key reason it cannot simply be outlawed (Ekins, 2017). Even with human review, the boundaries of hate speech are hard to draw reliably. Moreover, digital platforms may face conflicts of interest when regulating hate speech. During the Malaysian election, for example, TikTok allowed commercial partnerships for political content (Jalli, 2023a). User engagement is also a valuable asset for platforms, which may intentionally leave “loopholes” in moderation to preserve freedom of expression and a positive user experience and so prevent user churn. Beyond these apparent conflicts, there may be undisclosed partnerships between platforms and political parties; Facebook, for instance, was once embroiled in a scandal over algorithms being used to influence a presidential election.
The existence of these gray areas makes the behavior of digital platforms harder to assess and warrants deeper scrutiny.
Therefore, relying solely on platforms’ own management to solve the problem of hate speech is not enough.
Government Intervention: Efforts and Challenges in Combating Hate Speech
Hate speech regulation has long been a topic of international concern. As early as 1948, the Universal Declaration of Human Rights declared that everyone has the right to protection from discrimination. In response to the growing incidence of hate speech online, the United Nations published the Strategy and Plan of Action on Hate Speech in 2019, offering more thorough guidance on regulating it. At the national level, countries take different approaches to hate speech regulation, shaped by their values and social cultures.
Among developed countries, the regulation of hate speech varies. The United States, which strongly emphasizes freedom of speech, takes a relatively tolerant attitude toward hate speech and has not written a ban on it into federal legislation. In contrast, European countries such as Germany and the United Kingdom have adopted relatively strict controls and written hate speech regulation into law, and digital platforms that fail to act promptly against hate speech may face fines of varying severity; Twitter, for instance, recently faced reported potential fines of up to $30 billion for alleged violations of German hate speech laws (Jayshi, 2023). Developing countries with more complicated ethnic tensions and internal conflicts not only enforce social media regulation through laws but also place great emphasis on soft management. Ethiopia’s Hate Speech and Disinformation Prevention and Suppression Proclamation, for example, mandates the government to conduct public awareness and media literacy campaigns to help citizens identify false information, alongside legal enforcement of social media controls. Moreover, to counter the dominance of Western social media platforms, many developing countries are actively building domestic platforms for public affairs. China’s Weibo, which strictly controls speech, is one example; the Ethiopian government is likewise developing its own social media platform to replace Facebook, Twitter, and others, in order to lessen foreign influence in the nation’s politics (Reuters, 2021).
Although regulating social media through legal means is standard practice across the globe, it is nevertheless important to consider how to do so while upholding the right to free speech. Developing countries often resort to extreme measures to control the situation. In Ethiopia, for example, statistics show 274 hours of internet blackout in 2019, making blackouts one of the government’s most frequently used means of controlling speech (Hamilton, 2020). This practice not only infringes on citizens’ freedom of speech and human rights but also deprives people of reliable information during crises and emergencies, deepening public anxiety and distrust of the government. Yet blackouts cannot fundamentally resolve crises such as protests and demonstrations, and excessive control may even drive more people to the dark web, an unindexed network that requires a special browser to access, pushing extremist content and its contributors further underground. While fewer people may then become radicalized by stumbling across such content, that content also becomes far harder to monitor and regulate (Kumar & Rosenbach, 2019).
In this context, governments face a difficult task: how to develop appropriate social media management policies while protecting social stability and citizens’ rights. The establishment of an effective, sustainable, and flexible regulatory system remains a direction that many countries need to explore.
The Challenge of Combating Hate Speech: Joint Responsibility of Social Media, Government, and Citizens
Hate speech breeds prejudice and discrimination, creating divisions among social groups and undermining inclusivity and diversity. The collective nature of new media undoubtedly intensifies discrimination, impeding the social integration of certain groups, violating the public’s equal rights, harming specific communities, and even leading to actual violence. Addressing hate speech is a global challenge that requires joint efforts from social media platforms, governments, and citizens. Social media platforms need to take stronger measures to identify, delete, and restrict hate speech, while enhancing transparency and impartiality to protect freedom of speech and citizens’ rights. Governments also need to participate in this process, developing appropriate regulatory policies and laws while balancing social stability and citizens’ rights. Most importantly, everyone should be aware of their responsibility to avoid expressing or spreading hate speech, promoting a more inclusive and harmonious society.
BSR. (2018). Human Rights Impact Assessment Facebook in Myanmar. https://about.fb.com/wp-content/uploads/2018/11/bsr-facebook-myanmar-hria_final.pdf
Carson, D. (2021). A Content Analysis of Political Discourse on TikTok. Eagle Scholar, University of Mary Washington. https://scholar.umw.edu/cgi/viewcontent.cgi?article=1445&context=student_research
Ekins, E. (2017). 82% Say It’s Hard to Ban Hate Speech Because People Can’t Agree What Speech Is Hateful. Cato Institute. https://www.cato.org/blog/82-say-its-hard-ban-hate-speech-because-people-cant-agree-what-speech-hateful
Giansiracusa, N. (2021). Facebook uses deceptive math to hide its hate speech problem. Wired. https://www.wired.com/story/facebooks-deceptive-math-when-it-comes-to-hate-speech
Hamilton, I. A. (2020). Ethiopia’s government shut down the entire country’s internet and 80 people have been killed in protests following the assassination of a popular musician. Business Insider. https://www.businessinsider.com/ethiopias-internet-totally-cut-off-following-killing-haacaaluu-hundeessaa-2020-7
Handley, E. (2023). Indonesia is one of our closest, largest neighbours. So who is Anies Baswedan, the man hoping to be its next leader? ABC News. https://www.abc.net.au/news/2023-03-09/anies-baswedan-indonesia-presidential-run-the-world-interview/102068120
Hovsepyan, T. (2021). Role of AI in Facebook. Plat.AI. https://plat.ai/blog/the-role-of-ai-in-facebook/#hate-speech-and-bullying
Hui, J. Y. (2020). Social Media and the 2019 Indonesian Elections: Hoax Takes the Centre Stage. Southeast Asian Affairs, SEAA20(1), 155–172. https://doi.org/10.1355/aa20-1i
Jalli, N. (2023a). How TikTok became a breeding ground for hate speech in the latest Malaysia general election. The Conversation. https://theconversation.com/how-tiktok-became-a-breeding-ground-for-hate-speech-in-the-latest-malaysia-general-election-200542
Jalli, N. (2023b). TikTok’s poor content moderation fuels the spread of hate speech and misinformation ahead of Indonesia 2024 elections. The Conversation. https://theconversation.com/tiktoks-poor-content-moderation-fuels-the-spread-of-hate-speech-and-misinformation-ahead-of-indonesia-2024-elections-202439
Jayshi, D. (2023). Was Twitter Fined $30B for Violating Hate Speech Laws in Germany? Snopes. https://www.snopes.com/fact-check/germany-fine-twitter-hate-speech/
Johnson, N. F., Leahy, R., Restrepo, N. J., Velasquez, N., Zheng, M., Manrique, P., Devkota, P., & Wuchty, S. (2019). Hidden resilience and adaptive dynamics of the global online hate ecology. Nature (London), 573(7773), 261–265. https://doi.org/10.1038/s41586-019-1494-7
Kumar, A., & Rosenbach, E. (2019). The Truth About The Dark Web – IMF F&D. IMF. https://www.imf.org/en/Publications/fandd/issues/2019/09/the-truth-about-the-dark-web-kumar
Reuters. (2021). Ethiopia starts building local rival to Facebook. The Guardian. https://www.theguardian.com/world/2021/aug/23/ethiopia-starts-building-local-rival-to-facebook
Wilmot, C., Tveteraas, E., & Drew, A. (2021). Dueling Information Campaigns: The War Over the Narrative in Tigray. Media Manipulation Casebook. https://mediamanipulation.org/case-studies/dueling-information-campaigns-war-over-narrative-tigray
Zuckerberg, M. (2018). A Blueprint for Content Governance and Enforcement | Facebook. Facebook.com. https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/