Hate Speech & Online Harm

What is 'hate speech'?

According to the United Nations, 'hate speech' is offensive speech directed at a group or individual on the basis of inherent characteristics, such as race, religion or gender, that may threaten social peace. To provide a unified framework for addressing the issue globally, the UN Strategy and Plan of Action on Hate Speech defines it as 'any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor'. However, there is no uniform definition of hate speech in international human rights law; the concept is still being debated, especially where it touches on freedom of opinion and expression, non-discrimination and equality.

Hate speech shows up in virtually every online community and on every public network platform. It can be conveyed through any form of expression, including images, cartoons, memes, objects, gestures and symbols, and can be spread offline or online. John Perry Barlow's original free-speech vision of the internet was: 'We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity' (Barlow, 1996). I usually understand speech as a way to express our ideas to others or to explain our reasoning, but Parekh defines hate speech as speech that 'expresses, encourages, stirs up or incites hatred against a group of individuals distinguished by ethnicity, gender, religion, nationality or sexual orientation' (Parekh, 2021, p. 40). As Parekh's definition shows, hate speech invokes real or perceived 'identity factors' of an individual or group, including religion, ethnicity, national origin, race, colour, descent and gender, but also language, economic or social origin, disability, health status and sexual orientation, among others. The speech itself need not be violent or emotive, nor lead to public violence in and of itself, to count as hate speech. Some speech we see on the internet merely offends or hurts someone's feelings, but other hate speech can harm people immediately; it is this kind of content that needs restriction and regulation. Some hate speech comes from people who see themselves as part of a marginalised group and who try to prove themselves on social media by posting hate speech as a way of speaking out. At the same time, the need to 'preserve freedom of expression from censorship by States or private corporations' is often invoked to counter efforts to regulate hateful expression, particularly online.

Who does hate speech affect?
'88 per cent of survey respondents have seen examples of racism towards Indigenous people on social media.' One survey respondent explained the many forms this can take: 'Often the comment section of a news article on Aboriginal people is the worst' (Carlson & Frazer, 2018). This research shows that both online social media and news outlets have a racism problem. Content is created by humans, so it easily starts from subjective opinion and is hard to keep neutral. When this happens in the news, the company needs to review itself and hold its content editors to fair reporting, because news that reads like hate speech influences the public and easily produces conflict between pro- and anti- camps. All hate speech rests on a particular point of view. That view may be wildly incorrect, or it may be defensible but expressed in an extreme way. It can hurt feelings, inflame emotions on both sides, lead to online abuse, and sometimes escalate into real-life incidents such as shootings. At bottom, two people hold different opinions and emotions and cannot resolve the problem through reason.

Different kinds of hate speech
Hate speech is often accompanied by online harm, which takes many forms: cyberbullying, adult cyber abuse, image-based abuse, and illegal and restricted online content. Such content is easy to find on the internet, and because it can lead to social problems, platform supervision also needs to change to some extent. Toward content itself, platforms should maintain a neutral attitude, neither endorsing nor condemning; but when content crosses into extremist speech, the platform should ban and delete it in line with its unified terms of use and moderation policies, to prevent later harm. Regulating this behaviour requires coordination at three levels: the platform itself, the user, and the government (Flew, 2021). Online harm affects a large number of people. Children, especially vulnerable children, are most affected, and it can also have a serious impact on adults, some of whom develop self-harm or suicidal thoughts as a result. Online harm now costs almost nothing to inflict: many people need only a computer to say something about a person, whether it is true or not. After that, the claim spreads exponentially, and soon everyone knows it. If it is negative, that person's reputation is almost destroyed.

Take Facebook as an example. Meta, as a major company, has set a series of rules for content publishing, including but not limited to hate speech, and its framing draws on the United Nations' definition. Meta believes that such content creates an atmosphere of intimidation and exclusion, and in some cases advocates real-world violence. On this basis, Meta also recognises that users sometimes post content containing derogatory terms or someone else's hate speech in order to condemn it or raise public awareness; the publisher needs to express that intention clearly, or the platform will remove the content along with other prohibited material. Facebook divides prohibited hate speech into three tiers, designed to protect people on the basis of protected characteristics. The first tier covers violent speech in words or images, support for violence, and dehumanising comparisons, generalisations or behavioural statements; broadly, it is text or imagery that attacks people at the most severe level, and it is prohibited on the platform. The second tier focuses on content targeting individuals or groups with protected characteristics that alleges physical, mental or moral deficiency, or that uses slurs and contemptuous language. The third tier prohibits calls for the exclusion or segregation of individuals or groups: incitement to hard-line exclusion, political exclusion, economic exclusion and social exclusion are not allowed on the platform. Meta's regulations on hate speech are very strict, yet we still sometimes see such content on the platform; even so, Meta has clearly tried to improve the existing network environment. Moreover, Meta's Transparency Centre notes that such statements are allowed if they are intended to condemn certain phenomena or raise public awareness. They can be published as long as the publisher makes that intent clear and avoids creating negative misunderstandings.
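The tiered logic described above can be sketched as a toy triage function. Everything below is hypothetical: the keyword markers, function name and decision strings are invented purely for illustration, and Meta's real enforcement relies on machine-learning classifiers and human reviewers rather than keyword matching.

```python
# Hypothetical sketch of tiered hate-speech triage (illustrative only;
# real moderation systems use ML classifiers and human review, not keywords).

TIER_MARKERS = {
    1: ["<violent threat>", "<dehumanising comparison>"],  # most severe
    2: ["<slur>", "<statement of inferiority>"],
    3: ["<call for exclusion>", "<call for segregation>"],
}

def triage(post: str, condemnation_intent: bool = False) -> str:
    """Return a moderation decision for a post.

    A post that quotes hate speech in order to condemn it or raise
    awareness is allowed, provided that intent is made clear
    (condemnation_intent=True), mirroring the carve-out described above.
    """
    text = post.lower()
    for tier in (1, 2, 3):  # check the most severe tier first
        if any(marker in text for marker in TIER_MARKERS[tier]):
            if condemnation_intent:
                return "allow (counter-speech, intent stated)"
            return f"remove (tier {tier} violation)"
    return "allow"

print(triage("ordinary holiday photo"))                # allow
print(triage("<slur> aimed at a group"))               # remove (tier 2 violation)
print(triage("quoting a <slur> to condemn it", True))  # allow (counter-speech, intent stated)
```

The sketch makes one design point concrete: the decision depends not only on what the content contains but on the stated intent behind posting it, which is exactly why the platform asks publishers of counter-speech to make their intent explicit.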

Two sides
Free speech is not always a bad thing; it has two sides. Some problems need to be raised by the public, so free speech must be exercised in moderation, otherwise it turns into hate speech online, such as discrimination or abuse. Moderate speech can be understood as advice, or an opinion voiced in an acceptable way: it is less aggressive, and people who see such content find it easier to engage with. If free speech goes the opposite way and everyone posts highly aggressive content online, the comment sections become a mess, with others posting content attacking the opposing view. The most common example is racism, especially in America. We can read news of police treating people differently based on skin colour; on Twitter, more and more people then post content about skin colour, resulting in long-running online fights over the topic. This even affects real society in America, through armed protests, shooting incidents and worsening neighbourhood relations. Ariadna Matamoros-Fernández calls this 'platformed racism', a concept with a dual meaning: platforms act as amplifiers and manufacturers of racist discourse, and the term also describes a mode of unfair platform governance (Matamoros-Fernández, 2017). Platforms sometimes delete hate speech, but some of it is allowed to stay on the site; looking again at racism, platforms appear to prefer deleting content they disagree with. A platform's regulation policy is supposed to occupy a neutral position while banning strongly aggressive hate speech, and the concept of platformed racism challenges and questions that position.

From online to real society
In our society we have equal human rights, and governments need to protect them for every person, whatever their age, disability, gender or other protected characteristic. While our rights are protected, we also need to fight for them. That fight should not mean retaliating with abusive language online, which only turns into online harm directed at publishers of content; we should stand up for ourselves from a practical and legal standpoint instead. You can express your opinions, but you need to watch your aggression. Even if you strongly disagree, you need to do so in a communicative tone, because speech that causes dissatisfaction or hurts feelings can easily escalate into hate speech.

Hate speech and online harm are pervasive issues in the digital age that have significant real-world consequences. Hate speech refers to any speech that is derogatory or discriminatory towards a person or group based on their race, gender, religion, sexual orientation, or other characteristics. Online harm, on the other hand, encompasses a wider range of negative behaviour that occurs online, such as cyberbullying, harassment, and trolling.

One of the most concerning aspects of hate speech and online harm is that they can have significant psychological and emotional impacts on their targets. Victims may experience anxiety, depression, and even suicidal ideation as a result of the negative messages they receive. Furthermore, online harm can also have real-world consequences, such as lost job opportunities, damaged reputations, and even physical harm.

In recent years, social media platforms have come under scrutiny for their role in allowing hate speech and online harm to thrive on their platforms. Many argue that social media companies have not done enough to combat these issues, and have failed to effectively enforce their policies against hate speech and online harm. As a result, many individuals and groups have called for greater regulation of social media platforms to ensure that they are held accountable for the content that is shared on their platforms.

In response to these concerns, many social media platforms have taken steps to address hate speech and online harm on their platforms. For example, platforms like Facebook, Twitter, and Instagram have implemented policies and tools designed to identify and remove hate speech and other harmful content. They have also increased their efforts to educate users about the dangers of hate speech and online harm, and have implemented reporting mechanisms that allow users to flag content that violates their policies. Despite these efforts, however, hate speech and online harm continue to be significant problems on social media.


Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Carlson, B. & Frazer, R. (2018) Social Media Mob: Being Indigenous Online. Sydney: Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online

Flew, Terry (2021) Regulating Platforms. Cambridge: Polity, pp. 91-96

Barlow, John Perry (1996) 'A Declaration of the Independence of Cyberspace'. https://www.eff.org/cyberspace-independence

Massanari, Adrienne (2017) #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3): 329–346.

Meta (2023) Hate speech, Transparency Center. Available at: https://transparency.fb.com/zh-cn/policies/community-standards/hate-speech/.

Roberts, Sarah T. (2019) Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press, pp. 33-72.

United Nations (no date) What is hate speech? Available at: https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech
