
Hate speech online: Cleaning up the dark side of the digital world



In our connected digital age, the internet was designed to be a world of free speech. As John Perry Barlow wrote in 1996: "anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity." However, with the widespread use of the Internet comes a dark side to digital communication. Online harm and hate speech spread across the online world, with significant consequences for individuals and for society as a whole.

The popularity of the Internet has made information dissemination more convenient: people can easily communicate and share opinions with others around the world. At the same time, however, as online communities expand, we are also seeing an increase in online harm and online hate speech. These issues not only pose serious threats to individuals' mental health and to social harmony, but also challenge how we understand and respond to free speech. A clear example is hate speech targeting specific groups such as the LGBTI community, which "can lead to exclusion and repression based on sexual orientation" (Polat et al., 2023).

This article takes an in-depth look at online harm and the impact of online hate speech, explores how to combat these issues, and, by examining real-life cases, proposes strategies for building a healthier and more inclusive digital society.

What are online harm and online hate speech?

While the internet was once envisioned as a utopian world where everyone could express their beliefs, the rise of hate speech and online harassment paints a different picture. Online harm and hate speech generally refer to behavior or speech disseminated on Internet platforms that harms individuals, groups, or society: for example, insults and derogatory remarks based on an individual's or group's culture, nationality, gender, sexual orientation, or appearance.

According to Brown's (2018) theory, hate speech is usually characterized by anonymity, invisibility, community, instantaneousness, and harm. The anonymity provided by the Internet emboldens individuals to express extremist and hateful views without revealing their true identities. Online hate speech also takes place in a virtual space that puts physical distance between the speaker and the audience: without seeing the emotional damage their remarks cause, or the disapproval of others, perpetrators may underestimate the harm they inflict, and their speech can spread widely without oversight or intervention. Hate speech is also often spread and accepted within specific online communities, which may be made up of like-minded people who share similar biases and positions. Finally, the Internet allows information to be published instantly, so hate speech can reach a wide audience quickly and evade algorithmic screening and supervision by the relevant agencies.

The dark side of real life

Online harm and online hate speech lead to negative consequences in the real world, including hate crimes. According to research by Lupu et al. (2023), extremists use social media platforms to radicalize individuals and organize offline violence, underscoring how serious, and how much worse, online hate speech has become. Hate speech on social media often targets specific ethnic or religious groups or sexual orientations, inciting violence and discrimination against them. Such speech can lead to real-life violence and attacks, causing physical and psychological trauma to victims while exacerbating social instability and division. For example, in 2019 two mosques in Christchurch, New Zealand, were attacked by a terrorist who had posted hate speech and racist content on social media in advance, inciting supporters of extremist ideas to carry out violent attacks on Muslims. This hate crime resulted in more than 50 deaths and dozens of injuries.

On some social media platforms, users frequently post hate speech targeting specific ethnic groups, including racial slurs and expressions of racial hatred. Such remarks not only disrupt the harmonious atmosphere of online communities but also exacerbate tensions between races. Hate speech against the LGBTI community is likewise widespread. "The existence of online hate speech not only disrupts online social communities, but also raises concerns about social and public safety" (Cao et al., 2020). It may lead to homophobia and discrimination based on sexual orientation in real life, so that these groups are continually excluded and suppressed in society. These cases show that such speech poses a serious threat to the stability and diversity of society, and that effective measures are needed to contain and combat it.

Has Facebook become a conduit for hate speech?

According to research by Schmidt and Wiegand (2017), Facebook has become a platform for the proliferation of hate speech. Researchers conducted a content analysis of posts, comments, and pages on Facebook to determine the type, target, and mode of dissemination of hate speech. They found that most hate speech is directed at individuals on the basis of characteristics such as race, behavior, physical appearance, sexual orientation, class, and gender. These posts often include derogation, discrimination, threats, and insults, which seriously affect victims' mental health, well-being, and social standing. Moreover, hate speech on Facebook has not only surged in quantity; its content has also become increasingly diverse and severe. The main reason is that the platform has not taken sufficiently effective action to prevent its spread.

Beyond this, Facebook faces challenges in identifying and limiting hate speech. As Paasch-Colberg et al. (2021) argue, this is due on the one hand to the broad protection of free speech and on the other hand to the complexity of multimodal hate speech. As information technology develops, hate speech takes increasingly diverse forms, including text, images, and video, which makes it difficult for Facebook to respond effectively. The platform's large and widely distributed user base further increases the difficulty of effective supervision.

How Facebook navigates hate speech

To effectively address these challenges, we must first understand what hate speech and online harm actually involve. Sinpeng et al. define hate speech as more than speech that simply offends or hurts feelings; it can cause immediate and long-term harm to marginalized groups. Platforms such as Facebook therefore play a vital role in moderating content and mitigating harm online. Fortuna and Nunes (2018) found that efforts to combat online hate speech include automated detection systems built on machine learning and deep learning models.

In response, Facebook has taken corresponding measures, such as strengthening content review and adding reporting mechanisms. Through automated detection systems that use machine learning and deep learning models, the platform can quickly screen for toxic information and remove it. Facebook has also created features that allow users to provide feedback and file reports, which increases oversight of hate speech and helps violations be detected and addressed promptly. Government departments have likewise introduced new legislation and supervision: stricter data privacy laws require social media platforms such as Facebook to be more transparent in handling user data, and regulators conduct periodic reviews of the platforms' algorithm design and operation.
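To give a rough sense of how such automated screening works, the sketch below trains a toy bag-of-words Naive Bayes classifier on a handful of invented labeled comments. This is purely illustrative: Facebook's actual systems rely on far larger proprietary datasets and deep learning models, and all the example data and labels here are made up.

```python
# Toy Naive Bayes text classifier for flagging toxic comments.
# Illustrative only: real platform systems use large labeled corpora
# and deep learning models; the training data below is invented.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs, label in {'toxic', 'ok'}."""
    word_counts = {"toxic": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocab = set(word_counts["toxic"]) | set(word_counts["ok"])
    total = sum(label_counts.values())
    scores = {}
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            # Laplace-smoothed log likelihood of each word given the label
            score += math.log((word_counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

training_data = [
    ("you people are subhuman garbage", "toxic"),
    ("go back to where you came from", "toxic"),
    ("i disagree with your argument", "ok"),
    ("thanks for sharing this article", "ok"),
]
model = train(training_data)
print(classify("you are garbage", *model))  # prints: toxic
```

A flagged comment would then be queued for removal or human review; in production, the classifier would also need to handle the multimodal content (images, video) discussed above, which is where deep learning models come in.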

Is regulation the answer?

In the Western media world, policing users' speech has always been controversial, because some believe it goes against free speech and the original intention of the Internet: a platform where users can express all kinds of opinions and views. If a social media platform's algorithms are too strict, many texts and images may be blocked from publication. Some go further and argue that there were "conflicts of interest between what was good for the public and what was good for Facebook" (Frances Haugen, testimony before the U.S. Congress, 4 Oct. 2021). In other words, Facebook may focus more on its own interests, such as making more money, than on ensuring the security of the platform and the well-being of its users. Political campaigns in the United States are a case in point.

According to research by Nwozor et al. (2022), the use of Facebook during political campaigns leads supporters and voters to spread hate speech. During some campaigns, supporters post discriminatory remarks on Facebook against specific ethnic groups, and people who back different parties or candidates post offensive comments in an attempt to discredit their opponents or their opponents' supporters. "Journalists are considered key actors in spreading hate speech on Facebook during political events" (Putri et al., 2022). This creates a situation in which publishers of news and regulators of social media platforms are driven by their own interests to control the flow of speech, which can lead to social instability and division. Social media platforms like Facebook therefore need to balance freedom of speech against social responsibility, which may involve establishing effective content review mechanisms, strengthening user education, and toughening punishment for violations.


The Internet is often closely associated with freedom, but the issues surrounding hate speech in today's digital environment require careful consideration. Discriminatory remarks against individuals or groups spread across the Internet, often under cover of anonymity and distance, and can propagate instantly. Notably, this online violence also extends into real life; for example, extremists use social media to radicalize individuals and incite violence against marginalized groups. We have to admit that social media platforms such as Facebook, because of their huge user bases and complex mechanisms, have become territory for some bad actors to spread hate speech. As public attention to the issue has grown, Facebook has taken the lead in implementing measures such as automatic detection systems and reporting mechanisms to combat hate speech. Balancing supervision with freedom of speech is the key problem that social media now needs to solve: overly strict supervision may restrict users' legitimate expression, while too much focus on corporate interests at the expense of user safety will cause people to lose trust and confidence in the Internet. Addressing online harm and hate speech requires a multi-faceted approach, including educating users to identify and counter hate speech. A healthier Internet requires the joint efforts of all parts of society.
