Islamophobia and Hate Speech: The challenge of social media governance

Introduction

Digital platforms have become indispensable channels for modern communication. As social media has grown in popularity, hate speech and its social consequences have moved into the spotlight. The Internet has revolutionized the way people connect and communicate on a global scale, giving rise to a distinct online culture. However, with that growth, digital platforms have become hotbeds of hate speech and discrimination that are difficult to govern effectively because of their global reach and decentralized nature. In recent years, Islamophobic online hate speech has become a topic of worldwide concern: it has caused real harm to Muslim communities and has made the regulation of social media platforms such as Facebook a major challenge. This blog therefore uses Islamophobia as a case study to analyze hate speech in popular culture and to explore the current state of, and the challenges facing, the governance of hate speech on social media.

Islamophobic Online Hate Speech

Hate speech is born out of hostility towards a specific group of people and is commonly defined internationally as offensive and derogatory messaging directed at individuals or groups on the basis of particular characteristics, such as color, race, gender, and religion (Sinpeng et al., 2021). The Internet has created a platform for people to express their opinions and engage in online dialogue, yet this freedom of expression has also enabled the proliferation of hate speech and discrimination against specific groups, including the Muslim community. With the widespread use of social media globally, more and more public dialogue takes place in digital form, which has contributed to the formation of online hate speech (Obermaier et al., 2021). On social media, online hate speech spreads rapidly in diverse forms, such as text, images, and emojis, and can affect a large number of people within a short period.

Source: NGO ‘Nóiz’

The United Nations defines Islamophobia as negative attitudes, motivated by fear of and bias against Islam, that manifest in hatred or disparagement and harm the Muslim community both through online networks and in the offline world (United Nations, 2023). Hate speech against Islam is intrinsically linked to Islamophobia, and its generation is shaped by complex factors such as political interests, religious hostility, and ideology (Khan & Phillips, 2021). Moreover, Islamophobic hate speech on social media has increased dramatically in recent years, contributing to rising violence and discrimination against Muslim communities (Ghasiya & Sasahara, 2022). As a religious minority, Muslims have thus become targets of online hate speech, and online platforms have become a breeding ground for the spread of Islamophobic content.

Additionally, the international community's prejudice against Islamic groups can fuel unwarranted disgust or fear toward Islam. For example, after the September 11 attacks, social media was flooded with rhetoric that demonized Islam as a terrorist and extremist religion and characterized Muslims as inherently and culturally inclined to violence, which had a significant impact on the mental health and well-being of Muslims around the world (Awan, 2023). Terrorist attacks carried out in the name of Islamism have caused public panic and physical harm, fostering stereotypes that closely associate the religious group with radicalism, risk, and violence (Williams et al., 2020). Social media has also carried widespread negative portrayals of, and offensive statements about, Islam, including indiscriminate attacks and intimidation, perpetuating discrimination, hostility, and even online persecution of this group.

Source: The Guardian

Hate Speech in Popular Culture

With the development of social media culture, expressions of hate speech are no longer limited to traditional textual forms; they now permeate the popular-culture elements of digital platforms in subtle ways (Aguilera, 2019). Hateful memes are a novel form of hate speech on social media: they usually combine text and images and convey messages that disparage and ridicule specific groups in an implicit or rhetorical manner (Chen & Pan, 2022). Memes are typically composed of a picture with a general statement at the top and a “humorous” twist at the bottom. As a result, users do not need to work hard to decode their meaning, and their “humorous” framing makes them harder for audiences to ignore and more likely to shape their impressions of the targeted group.

However, the hidden meaning behind these “humorous” memes is ridicule and denigration, and the humor makes the underlying hate speech and extreme linguistic prejudice harder to recognize. Aguilera's (2019) study of anti-Muslim memes found that their formation and meaning draw on stereotypes of the Islamic community, including the oppression of Muslim women, the supposedly inherent violence of Muslim men, and the allegedly aggressive nature of the religion. Islamophobic memes can thus be deployed in specific contexts to convey ideologies of religious hatred and to generate hostility towards the Muslim community within digital platform environments. Memes with discriminatory or derogatory connotations have become metaphorical devices that are frequently used in hate speech (Chen & Pan, 2022). In addition, images tend to leave a stronger impression on users than text, which makes memes an effective vehicle for spreading hate speech.

Source: CARE white paper series

Significantly, as products of popular culture, hateful memes are easy to reproduce and spread, and religious and racial bigots use them to express irrational hostility toward Islamic groups under the cover of seemingly humorous ridicule (Ghasiya & Sasahara, 2022). Because memes can be freely shared and modified with little regard for copyright, their reach and dissemination are difficult to predict or contain (Aguilera, 2019). Moreover, the words and images in memes are often ambiguous, and the meanings behind them can be elusive, which creates difficulties for artificial intelligence and algorithmic models trying to identify them. Consequently, when Islamophobic memes are downloaded and shared in large numbers, the influence of the ideology they express expands with them, posing a considerable challenge to the regulation of hate speech on social media platforms.
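To make that detection difficulty concrete, the sketch below illustrates why a meme whose caption looks harmless can only be assessed by considering the image and the text together. It uses a general-purpose vision-language model (CLIP, via the Hugging Face transformers library) purely as an illustration; it is not the system used by Facebook or the method described by Chen and Pan (2022), and the file name, caption, and candidate labels are hypothetical.

```python
# Minimal sketch: scoring a meme jointly on image + overlay text with CLIP.
# Not a production hate-speech detector; model choice and inputs are illustrative only.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example_meme.png")        # hypothetical meme image
caption = "just asking questions"             # benign-looking overlay text
candidate_labels = [
    f"a harmless humorous meme with the caption '{caption}'",
    f"a meme mocking a religious group with the caption '{caption}'",
]

# Encode the image and both candidate descriptions together, then compare.
inputs = processor(text=candidate_labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)

# A text-only filter sees nothing objectionable in the caption; a joint
# image-text score at least has a chance of flagging the combination for review.
for label, p in zip(candidate_labels, probs[0].tolist()):
    print(f"{p:.2f}  {label}")
```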

The Negative Impact of Hate Speech on Islamic Communities

Worryingly, social media discourse promotes and amplifies negative representations of the Islamic community as well as religious stereotypes. Ghasiya and Sasahara (2022) found that religious discriminators use social media platforms as a weapon to spread hate speech, constructing the Islamic community as a symbol of evil and terror and increasing social tensions. Much Islamophobic hate speech claims that Islam has dangerous or unwelcome characteristics, attributing these negative traits to the religious group as a whole (Khan & Phillips, 2021). While such rhetorical attacks nominally target Islam, the clear implication is that those who practice the religion are dangerous and deserve contempt or hatred.

Furthermore, the spread of such hate speech can expose the Muslim community to social exclusion and discrimination, making it harder to integrate into local society and even pushing members to the margins, with negative consequences for their education, employment, and social lives. More seriously, studies have shown that the spread of anti-Islamic hate speech threatens the group's social identity and that negative stereotypes of Muslims can lead to low self-esteem, self-denial, and depressive thoughts among those affected (Leets, 2002). Hate speech against Muslims therefore amounts to a violation of, and a threat to, the group's human rights.

Source: Andrew Harnik, AP Photo

While hate speech harms the Islamic community, digital platforms still lack effective governance of it. For example, in 2021 Muslim Advocates, a civil rights organization focused on discrimination against Muslims in the United States, sued Facebook for failing to combat anti-Muslim hate speech, alleging that the platform's lax regulation had allowed anti-Muslim speech and bias to flourish (Elfenbein, 2021). In addition, the Center for Countering Digital Hate (2022) reported that social media platforms failed to act on 89% of anti-Muslim hate posts, and that the pages concerned were filled with anti-Muslim content, including Muslim conspiracy theories and dehumanizing, negative portrayals of Muslims.

Facebook’s Platform Regulation and Challenges

As the scale and impact of hate speech have been amplified by modern communication technologies and increasingly concentrated online, it has become a pressing issue for regulators and digital platforms (Khan & Phillips, 2021). Because social media is not regulated by any single country and content spreads quickly and widely, regulating hate speech poses a great challenge for platforms (Sinpeng et al., 2021). In addition, the decentralized nature of hate speech makes it difficult for platforms to moderate content effectively, and the risk of removing legitimate speech along with harmful speech makes it hard to strike a balance between protecting freedom of expression and curbing hate speech. As one of the world's largest social media platforms, Facebook is therefore expected to take responsibility and establish platform guidelines to limit the harm of hate speech. At the same time, Facebook's online governance needs to attend to the demands of vulnerable populations, such as the Muslim community, to be free from hate speech and discrimination.

Source: Michael Dwyer, AP Photo

To protect the image of the Islamic community on digital platforms and in public discourse, Facebook has taken steps to address anti-Muslim hate speech. The Facebook Community Standards explicitly define hate speech as a matter of policy, detailing prohibited textual and visual forms for each protected characteristic, including attacks on Muslims. In addition, Facebook's partners detect and report hate speech on the platform and use manual auditing, data analysis, and network detection to collect relevant information, providing Facebook with data and specific recommendations for improvement (Allan, 2017). Although using artificial intelligence to screen content has been effective, the screening system works from a database of previously stored banned terms; if uploaded content circumvents those terms, the system struggles to detect and identify it (Giansiracusa, 2021). Facebook has therefore established a content review team and hired content reviewers worldwide to review relevant content manually and improve the accuracy and efficiency of the platform's review of hate speech.
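As a rough illustration of that limitation, the following minimal sketch shows how a banned-term filter of the kind described above can be evaded by trivial re-spellings or by new coded phrases that are not yet in the database. It is not Facebook's actual system; the term list and example posts are placeholders.

```python
# Minimal sketch of keyword-based screening (not Facebook's actual system).
# The banned-term list and example posts below are hypothetical placeholders.

BANNED_TERMS = {"banned_slur", "banned_phrase"}

def flags_post(post: str) -> bool:
    """Return True if the post contains any banned term verbatim."""
    text = post.lower()
    return any(term in text for term in BANNED_TERMS)

print(flags_post("a post containing banned_slur"))    # True: exact match is caught
print(flags_post("a post containing b@nned_slur"))    # False: a trivial re-spelling slips through
print(flags_post("a post using new coded language"))  # False: terms not yet in the database are invisible
```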

However, with the global expansion of social media use, manual review teams are not sufficient to deal with the proliferation of new types of hate speech. While Facebook has established an anti-hate-speech policy, the sheer volume of user-generated content makes it difficult to enforce that policy well, and the complexity and variability of hate speech further complicate monitoring and vetting. Because the meaning of speech must be assessed in the context of the surrounding post, even manual review struggles to reliably draw the boundaries of hate speech and must often judge specific text and images in isolation (O'Regan, 2018). According to UNESCO's Countering Online Hate Speech report (2015), online hate speech is more persistent and mobile than traditional forms of hate speech, which greatly increases the difficulty of governance. Therefore, with the development of popular culture on the Internet, the complexity of online language, the limitations of algorithms, and the constant emergence of new forms of speech mean that artificial intelligence cannot reliably identify all hate speech, posing an ongoing challenge for the regulation of social media platforms such as Facebook.

Conclusion


As the problem of online hate speech grows more serious, groups such as the Muslim community have become its victims, leaving social media platforms facing complex regulatory issues. In the Internet era, platform companies are responsible for creating a safe, transparent, and fair online communication environment for their users by formulating hate speech policies and combining algorithmic and human review. Social media platforms are also expected to help curb the digital ecosystem of anti-Islamic hatred and to protect the rights and identities of affected people on their platforms. Digital platforms therefore still have a long way to go in addressing hate speech within a dynamic speech environment and a complex cultural and social context.

References:

Aguilera, C. (2019). Memes of Hate: Countering Cyber Islamophobia. Retrieved from https://www.fairobserver.com/world-news/cyber-islamophobia-memes-hate-speech-muslims-news-19112/

Awan, O. (2023). Hate Speech And Its Effect On The Health Of Muslim Americans. Retrieved from https://www.forbes.com/sites/omerawan/2023/02/01/hate-speech-and-its-effect-on-the-health-of-muslim-americans/?sh=103d81944778

Chen, Y. & Pan, F. (2022). Multimodal detection of hateful memes by applying a vision-language pre-training model. PLoS One, 17(9).

Elfenbein, C. (2021). Suit seeks to limit anti-Muslim speech on Facebook but roots of Islamophobia run far deeper. Retrieved from https://theconversation.com/suit-seeks-to-limit-anti-muslim-speech-on-facebook-but-roots-of-islamophobia-run-far-deeper-159418

Ghasiya, P. & Sasahara, K. (2022). Rapid Sharing of Islamophobic Hate on Facebook: The Case of the Tablighi Jamaat Controversy. Social Media + Society, 8(4).

Giansiracusa, N. (2021). Facebook uses deceptive math to hide its hate speech problem. Retrieved from https://www.wired.com/story/facebooks-deceptive-math-when-it-comes-to-hate-speech

Khan, H. & Phillips, J. L. (2021). Language agnostic model: Detecting Islamophobic content on social media. In Proceedings of the 2021 ACM Southeast conference, 229–233. 

Leets, L. (2002). Experiencing hate speech: Perceptions and responses to anti-Semitism and antigay speech. Journal of Social Issues, 58, 341–361.

Williams, M. L., et al. (2020). Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime. The British Journal of Criminology, 60(1), 93–117.

Obermaier, M., Schmuck, D., & Saleem, M. (2021). I’ll be there for you? Effects of Islamophobic online hate speech and counter speech on Muslim in-group bystanders’ intention to intervene. New Media & Society, 0(0). 

O’Regan, C. (2018). Hate Speech Online: an (Intractable) Contemporary Challenge? Current Legal Problems, 71(1), 403–429.

Sinpeng, A., et al. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.

United Nations. (2023). International Day to Combat Islamophobia. Retrieved from https://www.un.org/en/observances/anti-islamophobia-day
