Challenges and Governance in the Digital Age: Hate Speech and Online Abuse on Social Media During the Palestinian-Israeli Conflict

What is hate speech and online abuse?

With the rapid development of the digital age and the spread of Internet technology, hate speech and online abuse on social media platforms have become increasingly rampant. This kind of online violence not only degrades the digital landscape but also creates serious challenges for the governance of the online environment.

Flew (2021) emphasized that hate speech and harmful behavior on social media platforms have varying degrees of negative impact on individuals, communities, and even society as a whole. Hate speech refers to speech with offensive intent targeting attributes such as race, gender, religion, ethnicity, or political stance, and includes racist remarks, homophobic and transphobic discrimination, and extremist speech. Hate speech has also been defined as a form of “group-focused enmity” (Zick et al., 2008).

Research demonstrates that online violence such as hate speech and online abuse is a form of cybercrime driven by malicious harassment and causes lasting harm to victims’ mental health (Dreißigacker et al., 2024). The negative emotions caused by verbal attacks are carried from online into offline spaces, leaving victims feeling uneasy and afraid in real life and further spreading the negative social impact. Moreover, this phenomenon not only crosses moral boundaries but also infringes on victims’ reputation, privacy, and other legitimate rights and interests. Some online abusers have even escalated from rumors and slander to intimidation and threats to personal safety.

Therefore, cultivating sound values and a sense of moral responsibility among network users, while effectively supervising and governing such cybercrime, has become a primary issue in the context of digital communication.

Who started the social media war during the Israeli-Palestinian conflict?

In the all-out war between Palestine and Israel, supporters of both sides have engaged in verbal confrontations on mainstream social media platforms such as X and Instagram, many of them laced with hate speech that incites violence and personal attacks. Along political lines, people who openly support Israel on social media are called “Fascists” or “Zionists” by those who support Palestine, while those who support Palestine are called “Anti-Semites” by supporters of Israel.

Screenshot of LeBron James’s post (from Instagram)

Comments section for LeBron James’s post (from Instagram)

NBA superstar LeBron James publicly supported Israel in an Instagram post on October 12, 2023, and his comment section quickly filled with online violence from Palestinian advocates (Jacob & Church, 2023). Some comments invoked James’s race with hostility and used insulting terms such as “coward” and “disgrace” to describe his behavior. Some extreme remarks claimed that James supports Israel because he has never read a book and therefore cannot tell right from wrong. In his comment section, users with opposing political stances hurled targeted hate speech at each other, waging a war of online violence on social media.

Members of the Harvard Undergraduate Palestine Solidarity Committee interrupted speeches and held signs in a demonstration at Convocation (from The Harvard Crimson)

Conversely, some people who support Palestine on social media have also been subjected to hate speech and online violence from supporters of Israel. After Israel launched a full-scale war, Harvard students organized a Palestine solidarity committee and co-signed a statement declaring that they “hold the Israeli regime entirely responsible for all unfolding violence” (Reeve, 2023).

Screenshot of Tweets by Lawrence H. Summers (from X)

Former Harvard University President Lawrence H. Summers issued a public statement on X on October 12, 2023, condemning the students’ support for Palestine, saying he was ashamed of their actions and criticizing them on moral grounds. This led more users to direct hate speech at the students on social media, with some calling them “hateful racists”, “Harvard’s Pests”, and “Harvard’s Shame”, and even sending racist slurs to Asian students.

In addition, supporters of Israel doxxed Harvard students who supported Palestine, maliciously publishing private information such as the students’ home addresses and phone numbers, and even issuing death threats to Muslim students. After posting hate speech on social media, they organized “doxxing trucks”, elevating cyberviolence to the level of real-world harm (Thorbecke, 2023). Beyond verbally attacking Palestinian supporters on social media, Israeli supporters rented a billboard truck displaying the images and names of Harvard students who support Palestine. The truck was parked in front of Harvard University with a banner referring to the students as “Harvard’s Leading Anti-Semites”.

“Doxxing trucks” (from The Harvard Crimson)

After the “doxxing trucks” incident, not only Harvard students but also other students and faculty who spoke out for Palestine were subjected to varying degrees of online abuse and hate speech across social media platforms. Evidence shows that the frequency of vicious online incidents increased significantly during the Palestinian-Israeli conflict, and the targets of attacks are no longer just ordinary Internet users but include many public figures (Brown, 2021).

Additionally, research demonstrates that hate speech is posted online to gain social approval among like-minded groups, and this social-approval dynamic incites violence and spreads extreme emotions. When posters’ toxic messages received an unusually high number of likes from other users, their subsequent messages became even more poisonous (Walther, 2023). In other words, some people post hate speech on social media to find others with the same political stance, to fit into a specific group, and to win acceptance from one another by collectively committing online abuse.

The impact of online violence

The harm caused by online violence, such as hate speech on social media and online abuse, is multifaceted and far-reaching:

From a psychological perspective, online violence brings fear, anxiety, unease, and other negative emotions to targeted individuals, and hateful information can easily induce depression, anxiety disorders, post-traumatic stress disorder (PTSD), and other conditions, causing lasting harm to mental health and well-being.

From a physical perspective, cyberbullying can easily escalate into real-life harm. In this case, the Palestinian-Israeli conflict on social media culminated in the “doxxing trucks” incident: Israeli supporters used the trucks to release the private information of Palestinian supporters, exposing those individuals to risks and threats of physical harm.

In terms of perceptions, inflammatory language and derogatory comments on the Internet exacerbate personal prejudice and perpetuate harmful stereotypes. In particular, racist, xenophobic, Islamophobic, and anti-Semitic remarks based on race, religion, and related attributes appeared repeatedly in this conflict.

In terms of society, abusing the freedom of speech to publish hateful information erodes users’ trust in social media platforms and weakens social groups’ sense of security in digital spaces. It may also weaken online users’ enthusiasm for expressing personal opinions and participating in communication, and can even cause social isolation, pushing victims toward social marginalization.

Social media platforms are great achievements of the digital age, but they can also become “accomplices” in spreading harmful information. The Israeli-Palestinian conflict reflects a long-standing opposition between the two sides; the complexity and controversy behind it triggered this social media conflict, and the opposing political positions espoused by supporters of both parties make it a highly polarizing issue.

Consequently, addressing social media conflicts and curbing the spread of hate speech and online abuse has become imperative. Governance measures should not be limited to the Palestinian-Israeli conflict on social media; rather, cyber-violence on social media as a whole should be the main target, with rigorous solutions provided from multiple perspectives.

Governance measures

Governments & Legislation

The promulgation of strong laws and regulations can effectively control the phenomenon of online violence. This means using the support of a legal framework to clearly define vicious incidents of hate speech and online abuse, and to impose corresponding punishments according to the severity of each incident.

For example, on February 26, 2024, the Canadian government introduced “Bill C-63: the Online Harms Act” (Langevin, 2024). This bill builds on the earlier “Bill C-11: the Online Streaming Act”, which took effect on April 27, 2023 (Allie, 2023). In terms of administrative penalties, the bill provides for fines of up to $70,000 for users who maliciously post hate speech and online abuse with serious social impact. Furthermore, social media service operators that violate the bill face fines of up to 6% of the platform’s total revenue, or $10 million.

Legislation and policy play a significant role in managing online violence; their main purpose is to strengthen content-censorship systems on social media, formulate specific digital safety plans, and standardize and supervise citizens’ speech on the Internet. Unlike other regulators such as technology companies or civil community organizations, the government carries more authority. As the setter of rules and standards, the government can more accurately identify online abusers and take tough measures to reduce such incidents.

Technology companies & Social media service operators
Technology companies and social media service operators act as the executors of governance over online violence such as hate speech and online abuse. Evidence shows that many user-generated-content platforms detect and delete illegal content by deploying specific algorithmic auditing systems such as automated hash-matching, keyword filters, and predictive machine-learning classifiers (Gorwa et al., 2020).
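To make the hash-matching idea concrete, here is a minimal illustrative sketch (not any platform’s actual system): previously identified banned content is stored as cryptographic digests, and each new upload is hashed and checked against that blocklist. The payload strings and function name are hypothetical placeholders.

```python
import hashlib

# Hypothetical blocklist: digests of content a platform has already
# identified as banned (real systems store hashes of known illegal media).
BANNED_HASHES = {
    hashlib.sha256(b"known-harmful-content").hexdigest(),
}

def is_banned(upload: bytes) -> bool:
    """Return True if the upload exactly matches known banned content."""
    return hashlib.sha256(upload).hexdigest() in BANNED_HASHES

print(is_banned(b"known-harmful-content"))  # True: an exact re-upload is caught
print(is_banned(b"slightly edited copy"))   # False: cryptographic hashes miss edits
```

Note that exact cryptographic hashing only catches identical re-uploads; production systems typically rely on perceptual hashing, which tolerates small edits such as cropping or recompression.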

In addition, establishing independent regulatory organizations and deploying dedicated reporting mechanisms will help technology companies and social media service operators deal with hate speech and combat online harm.

Establishing a regulatory organization means setting up a manual review body alongside automatic algorithmic review, ensuring through secondary review that the negative impact of hate speech is promptly identified and controlled. Although automated review tools can analyze keywords in text to filter messages with offensive intent, and can identify harmful content in images, videos, and other media, they cannot match a human’s understanding of obscure or coded hate speech.

Taking racist remarks as an example, some users do not use explicit slurs when posting racially charged hate speech, but instead open with phrases such as “Is your hobby growing cotton?” or “You must like eating fried chicken!” to insult Black people. It is difficult for an automated system to classify such speech as hateful through keywords like “cotton” and “fried chicken”. Faced with such cases, a manual review organization can immediately recognize the racist intent and filter the content.
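The limitation above can be shown with a toy keyword filter; the keyword list and sample comments are hypothetical, and real moderation pipelines are far more sophisticated:

```python
# A toy keyword filter illustrating why coded insults slip through.
SLUR_KEYWORDS = {"coward", "disgrace"}  # explicit terms the filter knows about

def keyword_filter_flags(comment: str) -> bool:
    """Flag a comment only if it contains a known explicit keyword."""
    words = comment.lower().split()
    return any(keyword in words for keyword in SLUR_KEYWORDS)

print(keyword_filter_flags("What a disgrace"))                # True: explicit term caught
print(keyword_filter_flags("Is your hobby growing cotton?"))  # False: coded taunt missed
```

The second comment carries clear racist intent to a human reviewer, yet contains no listed keyword, which is exactly the gap a secondary manual review is meant to close.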

Investing in the development and deployment of reporting mechanisms encourages users to combat online harm and safeguard their rights and interests. If harmful information slips past the manual review organization, the reporting mechanism serves as a “third review”. Platforms can empower users to report content they believe carries offensive intent or violates laws or platform rules. Platform managers can then act on user reports and impose corresponding penalties on perpetrators who maliciously publish hate speech. Depending on severity, different levels of sanction can be applied, such as limiting posting frequency, muting, or banning accounts.
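The graduated sanctions just described can be sketched as a simple escalation table; the tier names and thresholds here are hypothetical, not any platform’s actual policy:

```python
# Hypothetical escalation table: upheld user reports map to sanctions.
PENALTIES = {
    0: "no action",
    1: "limit posting frequency",
    2: "temporary mute",
    3: "permanent account ban",
}

def apply_penalty(upheld_reports: int) -> str:
    """Map the number of upheld reports against a user to a sanction tier."""
    tier = min(max(upheld_reports, 0), 3)  # clamp to the defined tiers
    return PENALTIES[tier]

print(apply_penalty(0))  # no action
print(apply_penalty(1))  # limit posting frequency
print(apply_penalty(5))  # permanent account ban
```

The design choice worth noting is that escalation gives first-time or borderline offenders a corrective path while reserving permanent bans for repeated, confirmed abuse.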


In conclusion, the phenomenon of cyberbullying presents significant challenges to the governance of the digital environment, with wide-ranging implications for people, communities, and societies. In particular, hate speech and online abuse, which are intentionally offensive and target specific demographics by race, gender, religion, or political affiliation, are on the rise on social media platforms and are aggravating and polarizing online discourse.

The main objective of this article is to highlight the negative impacts of cyberviolence on digital audiences, which include harms to mental health, physical safety, perceptions, and societal cohesion. Based on the challenges, governance measures are proposed from multiple perspectives to promote respect, civility, and inclusiveness in the digital space through the concerted efforts of governments, technology companies, and social media service operators while achieving the goal of combating cyber-injury incidents and preserving the ecology of the digital environment.

Allie. (2023). Justice Centre’s Statement on Bill C-11, The Online Streaming Act, receiving Royal Assent. In CE Think Tank Newswire. ContentEngine LLC, a Florida limited liability company.
Brown, H. (2021). Celeb battles heat up on social media over Israeli, Palestinian conflict: This war is playing out on the battlefield of social media as well as on the streets of Israel and Gaza, and celebrities supporting both sides have spoken out, receiving both backlash and praise. The Jerusalem Post (Online).
Dreißigacker, A., Müller, P., Isenhardt, A., & Schemmel, J. (2024). Online hate speech victimization: consequences for victims’ feelings of insecurity. Crime Science, 13(1), 4–13.
Flew, T. (2021). Hate Speech and Online Abuse. In Regulating Platforms. Cambridge: Polity, pp. 91–96 (pp. 115–118 in some digital versions).
Frank, S. (2023). Members of the Harvard Undergraduate Palestine Solidarity Committee interrupted speeches and held signs in a demonstration at Convocation. The Harvard Crimson.
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945.
Jacob, L., & Church, B. (2023). NBA superstar LeBron James calls Hamas attacks on Israel ‘tragic and unacceptable.’ In CNN Wire Service. CNN Newsource Sales, Inc.
Julian, J. (2023). As Students Face Retaliation for Israel Statement, a ‘Doxxing Truck’ Displaying Students’ Faces Comes to Harvard’s Campus. The Harvard Crimson.
Langevin, L. (2024). Canada’s Bill C-63: Online Harms Act Targets Harmful Content On Social Media. Mondaq Business Briefing.
Reeve, E. (2023). Protest, fear and pride: US college students reflect on how they’re impacted by the Israel-Hamas war. In CNN Wire Service. CNN Newsource Sales, Inc.
Teesta, S. (2018). Understand what constitutes Hate Speech. Citizens for Justice and Peace.
Thorbecke, C. (2023). Names and faces of Harvard students linked to an anti-Israel statement were plastered on mobile billboards and online sites. In CNN Wire Service. CNN Newsource Sales, Inc.
Walther, J. B. (2023). Online “likes” for toxic social media posts prompt more − and more hateful − messages. The Conversation: Science + Technology.
Zick, A., Wolf, C., Küpper, B., Davidov, E., Schmidt, P., & Heitmeyer, W. (2008). The syndrome of group-focused enmity: The interrelation of prejudices tested with multiple cross-sectional and panel data. Journal of Social Issues, 64(2), 363–383.
