4 April 2024 Analysis by Yiqian Xu (Oliver)
Freedom of expression, hate speech, and online harm in the context of the Israel-Gaza conflict.
With the popularization of social media and the instant dissemination of information, freedom of speech has come to be enshrined by netizens as a supreme value and the fundamental right to express their will freely.
This freedom empowers people to explore, receive, and share diverse information and perspectives, allowing us to express who we are, to experience and understand diverse worldviews online, and to work together to build a society where everyone can freely express their beliefs (Flew, 2021).
However, the existence of hate speech not only challenges this ideal but also seriously threatens the nature of that freedom. Hate speech generally refers to incitement to hatred and hostility against a particular group based on identity characteristics such as race, ethnicity, gender, religion, nationality, or sexual orientation (Parekh, 2012, p. 40).
This type of speech stretches the boundaries of free expression and infringes on the rights of others.
As the vision of the Declaration of the Independence of Cyberspace has been realized (Barlow, 1996) and social media has developed rapidly and largely unchecked, the generation and spread of hate speech have accelerated, triggering wider harm in cyberspace. Especially in the long-running and complex context of the Israel-Gaza conflict, social media has become a breeding ground for hate speech and online harm, posing serious threats and having extremely negative impacts on individuals, groups, societies, and countries.
Online radicalism and incendiary content are more to the platforms’ “taste.”
The algorithmic design of social media platforms tends to prioritize content that triggers strong emotional responses and user interactions (e.g., comments, shares, and likes) (Massanari, 2017), which reinforces the spread of inflammatory content and fuels the rise of online radicalism.
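To make this mechanism concrete, the minimal sketch below shows how an engagement-driven ranking function of this kind might score posts. The weights, post fields, and the score_post function are illustrative assumptions for explanation only, not the actual algorithm of any platform.

```python
# Illustrative sketch only: a toy engagement-based ranking function.
# The weights and post fields are assumptions, not any platform's real system.

def score_post(post: dict) -> float:
    """Return a ranking score that rewards interaction-heavy posts."""
    # Interaction signals are weighted by how strongly they typically
    # predict further engagement (shares > comments > likes here).
    engagement = (
        3.0 * post.get("shares", 0)
        + 2.0 * post.get("comments", 0)
        + 1.0 * post.get("likes", 0)
    )
    # A crude "emotional intensity" signal (e.g., from a sentiment model)
    # multiplies the score, so outrage-inducing content rises fastest.
    intensity = post.get("emotional_intensity", 1.0)  # 1.0 = neutral
    return engagement * intensity

# Example: an inflammatory post with fewer likes can still outrank a calmer
# post because the intensity factor amplifies its engagement score.
calm = {"likes": 500, "comments": 40, "shares": 10, "emotional_intensity": 1.0}
inflammatory = {"likes": 200, "comments": 120, "shares": 60, "emotional_intensity": 2.5}
ranked = sorted([calm, inflammatory], key=score_post, reverse=True)
print(ranked[0] is inflammatory)  # True: the inflammatory post ranks first
```

Under these assumed weights, emotionally charged material is systematically promoted even when its raw popularity is lower, which is the dynamic the paragraph above describes.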
At the height of the Israel-Gaza conflict, inflammatory content such as hate speech, threats of violence, and celebrations of death and destruction exacerbated tensions on both sides and shaped understanding of, and emotional responses to, the conflict around the world (Stella et al., 2018).
For example, in the social media war during the 2023 conflict in Gaza, both Israel and Hamas used TikTok and X to promote their views, posting images and videos from the battlefield to rally emotional support among their own people.
The dissemination of this content not only provoked strong emotional responses among their respective supporters but also influenced the international community’s perception of the conflict (Spring, 2023).
In addition, the rise of online radicalism, which often manifests in the spread of extreme ideologies on social media platforms, incitement to violence, and the recruitment of supporters against perceived enemies across the globe (Barassi, 2016), adds new complexities to the conflict and raises challenges and questions about how social media companies should manage such content.
For example, after the January 6, 2021 attack on the U.S. Capitol by supporters of Donald Trump in an unsuccessful attempt to overturn the election result, Twitter, Facebook, Instagram, Snapchat, and YouTube banned the accounts of Trump and some of his supporters on grounds of “civic integrity” and election misinformation (Landale, 2021).
Fake news, the “trump card” of the media information war, intensifies online conflict.
Information warfare has become an integral part of regional conflicts: opposing parties manipulate public opinion and distort public perception by disseminating exaggerated or even fictitious reports, including unverified, inaccurate, or artificially manipulated images, videos, news items, and statements.
The aim is to influence perceptions of oneself, the other side, and the international community, to undermine the adversary’s credibility, to inflict psychological damage, and to reinforce one’s own moral superiority, thereby exacerbating antagonism and inciting hatred (Libicki, 1995).
For example, during the 2023 conflict on the Israel-Gaza border, social media was flooded with reports and videos of excessive use of force by the Israeli army and of violent acts by protesters, many of which proved to be misleading, edited, or taken out of context to inflame emotions (Bond, 2023).
The disinformation and propaganda sparked a strong emotional response from both communities, further fueling tensions.
The prioritization of commercial interests and unchecked ideological dominance
Social media platforms play a complex role in the online dimension of the Israel-Gaza conflict, serving both as a tool for disseminating information and as a battlefield for ideological struggle.
This is particularly evident in the handling of conflict-related hate speech and online harm, where social media companies’ policy enforcement is often driven by their commercial interests.
To maintain their user base and market share, these companies may be slow to remove inflammatory content and accounts (Corvelo et al., 2024), or may tolerate controversial speech to a certain extent, reflecting in part a prioritization of profit.
For example, in 2018, Facebook was criticized for allowing inflammatory content related to the ethnic cleansing in Myanmar to circulate on its platform, exposing how commercial interests can sometimes take precedence over social responsibility in the handling of hate speech (Sinpeng et al., 2021).
In terms of ideological dominance, social media content is often rooted in specific political or religious ideologies and is intended to exacerbate antagonism and hostility between the two sides.
At the same time, the disinformation and propaganda spread during the conflict are often designed to support the views of one side while ignoring the truth, and are fundamentally aimed at advancing a particular political or religious agenda (Williams, 1996).
In addition, cyberattacks on activists, journalists, and civilians are often aimed at silencing voices that contradict the attackers’ ideology, demonstrating the exclusion and suppression of “hostile” views under ideological dominance.
For example, during the 2023 conflict in the Gaza Strip, social media was filled with political slogans and images from both sides designed to mobilize support, emphasize their own victimhood, and disparage each other (Spring, 2023).
The materialization of online harm further amplifies psychological trauma and privacy concerns.
In the context of the Israel-Gaza conflict, social media has not only become a platform for the spread of hate speech and online harm but has also further amplified psychological trauma and raised serious privacy protection concerns.
Individuals targeted for attack, including activists and journalists, may experience extreme stress, fear, and feelings of isolation. Long-term exposure to hostility and aggression online can lead to mental health issues such as anxiety, depression, and post-traumatic stress disorder (PTSD) (Pluta et al., 2023).
At the same time, with the growing role of social media in conflicts, the issue of privacy protection has become particularly prominent.
Online attackers (sometimes called “keyboard warriors”) often threaten and harass by collecting and exposing a targeted individual’s private information (e.g., home address, phone number), a practice known as “doxing” (Douglas, 2016).
This is not only an invasion of personal privacy but can also translate into direct threats and harm in the real world. For example, a singer suffered a sustained campaign of online attacks after she expressed her support for Gaza via Twitter during the conflict, which placed her under extreme mental stress for a long period.
Her safety and that of her family were seriously threatened: she received threatening direct messages full of violent imagery, had to seek mental health support, and ultimately turned to the police for protection.
She subsequently reported receiving threatening phone calls and discovering that someone had left threatening letters outside her home, a serious violation of her privacy and a direct threat to her and her family’s safety (Bevan & PA Media, 2024).
Strategies to deal with the dangers of the online battlefield of the Israel-Gaza conflict.
Addressing online harms in the Israel-Gaza conflict requires a multi-dimensional, cross-sectoral approach by social media platforms, national governments, international organizations, and the public to reduce the spread of hate speech and online harm.
Social media platforms need to strengthen content moderation, combining advanced technology and manual moderation to improve the efficiency of identifying and filtering out inflammatory content, hate speech, and disinformation.
This includes establishing clear, specific community guidelines and publicly explaining them to users to define toxic speech and promote self-restraint among users.
At the same time, platforms need to strengthen content monitoring and censorship mechanisms, using AI and algorithmic technologies supplemented by manual review to ensure accurate and humane handling of harmful content.
In addition, increasing the transparency of policy implementation and encouraging public participation in reporting are key to jointly safeguarding cybersecurity, and platforms should regularly publish reports on addressing hate speech and online harm to enhance social trust and responsibility.
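As a rough illustration of the hybrid moderation approach described above, the sketch below routes content between automatic removal, human review, and approval. The thresholds, the placeholder classify function, and the labels are assumptions made for explanation, not any platform's real system.

```python
# Illustrative sketch only: a hybrid moderation pipeline combining an
# automated classifier with human review, as discussed above.
# Thresholds, labels, and classify() are assumptions, not a real system.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str   # "remove", "human_review", or "allow"
    reason: str

def classify(text: str) -> float:
    """Placeholder for a hate-speech classifier returning a score in [0, 1]."""
    # In practice this would call a trained model; a trivial keyword
    # heuristic keeps the sketch self-contained.
    flagged_terms = ("kill", "exterminate", "vermin")
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / 2)

def moderate(text: str) -> ModerationDecision:
    """Route content by classifier confidence: auto-remove, escalate, or allow."""
    score = classify(text)
    if score >= 0.9:
        # High-confidence violations are removed automatically and could be
        # logged for the platform's transparency report.
        return ModerationDecision("remove", f"auto-removed (score={score:.2f})")
    if score >= 0.4:
        # Borderline cases go to trained human moderators, supporting the
        # "accurate and humane handling" called for above.
        return ModerationDecision("human_review", f"escalated (score={score:.2f})")
    return ModerationDecision("allow", f"allowed (score={score:.2f})")

print(moderate("We should exterminate them like vermin"))   # auto-removed
print(moderate("Praying for peace for everyone affected"))  # allowed
```

The point of the two thresholds is the division of labor the essay recommends: automation handles clear-cut and low-risk cases at scale, while ambiguous content is escalated to human reviewers rather than decided by the model alone.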
Users should be educated to identify disinformation and hate speech in order to improve public media literacy, including through educational programs and public outreach, and should be given the tools and resources needed to verify the source and authenticity of information.
At the same time, promoting cross-cultural understanding and dialogue reduces misunderstandings and prejudices and enhances mutual understanding between users from different backgrounds.
Governments and schools should provide educational resources and training to enhance the public’s critical thinking skills, thereby strengthening their ability to critically receive and process information online.
Governments and international organizations have a key role to play in tackling hate speech and online harms: through legislation and policy they can establish legal boundaries while protecting freedom of expression, and they can strengthen regulation of social media platforms to take precautions.
In the face of the transnational nature of social media, international cooperation has become particularly important, requiring the establishment of common standards and mechanisms, as well as the coordination of national actions through international organizations, and the sharing of coping strategies and best practices.
In addition, providing support and protection to victims, including counselling and legal aid, is also one of the responsibilities of governments and international organizations to ensure that online hate speech is effectively combated.
Partnering with community leaders and social media influencers is key to promoting positive and peaceful content, using their influence to spread constructive messages.
In addition, raising public awareness of online hate speech and online harms through education and awareness campaigns is critical, as is encouraging the public to actively report and counter hate speech and disinformation online.
Supporting multicultural and inclusive activities, providing necessary support to victims, and encouraging the public, especially young people, to actively participate in the fight against online hate are important strategies to build social consensus and reduce online harm.
Conclusion
In today’s social media age of information explosion, freedom of expression is highly valued, but it is also accompanied by hate speech and online harm. Especially in the context of the Israel-Gaza conflict, the spread of hate speech and the breeding of online harm are driven by a variety of factors, with negative impacts on both individuals and society.
Not only does this challenge the boundaries of free speech, but it also raises serious concerns about privacy and mental health. Therefore, social media platforms, governments, international organizations, and the public need to adopt a multi-dimensional, cross-sectoral strategy to curb the spread of these negative impacts.
References
Flew, T. (2021). Hate Speech and Online Abuse. In Regulating Platforms (pp. 91-96). Cambridge: Polity. (pp. 115-118 in some digital versions)
Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz and P. Molnar (eds), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37–56). Cambridge: Cambridge University Press.
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney. https://hdl.handle.net/2123/25116.3
Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. https://www.eff.org/cyberspace-independence
Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Spring, M. (2023, November 27). Slick videos or more ‘authentic’ content? The Israel-Gaza battles raging on TikTok and X. BBC News. https://bbc.com/news/business-67497299
Stella, M., Ferrara, E., & De Domenico, M. (2018). Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences, 115(49), 12435-12440. https://doi.org/10.1073/pnas.1803470115
Barassi, V. (2016). Datafied citizens? Social media activism, digital traces and the question about political profiling. Communication and the Public, 1(4), 494-499. https://journals.sagepub.com/doi/full/10.1177/2057047316683200
Libicki, M. C. (1995). What is information warfare?. https://apps.dtic.mil/sti/tr/pdf/ADA367662.pdf
Bond, S. (2023, October 10). Video game clips and old videos are flooding social media about Israel and Gaza. NPR. https://www.npr.org/2023/10/10/1204755129/video-game-clips-and-old-videos-are-flooding-social-media-about-israel-and-gaza
Corvelo, S., Kelly, P., & Perreault, S. (2024). Frances Haugen, Facebook Whistleblower. SAGE Publications: SAGE Business Cases Originals. https://doi.org/10.4135/9781071946190
Pluta, A., Mazurek, J., Wojciechowski, J., Wolak, T., Soral, W., & Bilewicz, M. (2023). Exposure to hate speech deteriorates neurocognitive mechanisms of the ability to understand others’ pain. Scientific Reports, 13(1), 4127. https://doi.org/10.1038/s41598-023-31146-1
Bevan, N., & PA Media. (2024, March 13). Charlotte Church says family threatened over Gaza support. BBC News. https://bbc.com/news/uk-wales-68547160
Douglas, D. M. (2016). Doxing: A conceptual analysis. Ethics and Information Technology, 18(3), 199-210. https://doi.org/10.1007/s10676-016-9406-0
Landale, J. (2021, January 8). Capitol siege: Trump’s words ‘directly led’ to violence, Patel says. BBC News. https://bbc.com/news/uk-politics-55571482
Williams, R. H. (1996). Religion as political resource: Culture or ideology?. Journal for the Scientific Study of Religion, 368-378. https://heinonline.org/HOL/P?h=hein.journals/margin6&i=121