Did you think about each word when you wrote it?
Let’s hear what she has to say

(Image source: https://699pic.com/tupian/wangluobaoli.html)
Introduction
Prejudice can arise between groups within a country or across different countries, and it often finds expression in hate speech. Spread through social media, this poisonous discourse travels quickly and can ultimately cause real harm and destruction. When hate speech is directed at people affiliated with a particular group, hate crimes can eventually become unavoidable. Notably, hate speech is any attack on an individual’s race, ethnicity, or origin. Social media has become a breeding ground for hate, where people are harassed or threatened because of their gender, race, or religion. Many online bullies lack social skills and mask persistent self-doubt behind a superiority complex, always wanting to feel strong or dominant over others. This paper explores hate speech and online harm by discussing why people hate online, the consequences of hate speech and online harm, and recommendations for countering this poisonous discourse.
Why do people hate online?
On every social media platform, one can find racist or sexist comments about a person or a group of people. Such hate speech aims to threaten or hurt others or to create enmity among different groups, and it is a social issue of great global concern. According to Reichelmann et al. (2020), hate crimes in the United States and Europe have remained very high since 2017, and such crimes have been linked to extremist ideas and hate speech. Social media is the platform where extremists champion their cause, build an international extremist community, recruit members, and advocate violence. Because so many web users worldwide are on social media, such statements reach a large online audience. There is a significant difference between cyberstalking or cyberbullying and hate speech: cyberstalking targets individuals, while online hate focuses on the collective. Individuals and organized groups spread online hate through various channels, including online video games, websites, newsgroups, and blogs (Reichelmann et al., 2020). Exposure to online hate increases as users spend more time on internet services such as Facebook and YouTube. Most importantly, exposure to online hate is the first step of the radicalization process.
People encounter many types of online hate speech on the Internet, targeting attributes such as gender, religion, and race. One of the most frequent is online religious hate speech, in which individuals use cyberspace to promote violence and hatred against a group of people through inflammatory language. According to Castaño-Pulgarín et al. (2021), Islam is the most attacked religion in the world. Muslims have been framed as violent people who cannot adapt to Western values and are associated with terrorism in different countries. This narrative is used to justify opposition to Islam, expressed in hateful comments targeting refugees and immigrants, governments, and any political elite seen as favoring Islam (Castaño-Pulgarín et al., 2021). People thus hate online because a particular religion has been associated with terrorist activities.
Closely related to religious hatred is another type of online hate: online racism. On social media, users enjoy privacy and anonymity, their identities protected, which makes it possible to express racist attitudes without consequence. According to Castaño-Pulgarín et al. (2021), the majority of online haters target Native Americans and Black people, as shown by an extensive analysis of public comments on posts on the CBC News Facebook page. In addition, news articles about Aboriginal people attract the worst comments on social media: racist individuals share memes that racially target Aboriginal people, and others repost YouTube videos of them without their permission (Carlson & Frazer, 2018). Matamoros-Fernández (2017) claims that racist content is often generated and amplified on Facebook, Twitter, and YouTube through humor. Using the Adam Goodes case study, Matamoros-Fernández demonstrated how racists on Twitter used meme sharing to attack Goodes. The memes included images comparing Goodes with an ape and were shared using the ‘sensitive media’ filter that Twitter provides to its users (Matamoros-Fernández, 2017). This sensitive-media filter is used to disguise hate speech, which helps amplify racist speech across platforms.
In addition, hatred spread online has a political dimension. This type of online hate, triggered by political mechanisms, was visible in the wake of Trump’s 2017 executive order, which banned immigrants from seven countries from entering the United States; in such cases, democratic and political mechanisms themselves amplify hatred and intolerance toward others. People can therefore hate online because political discursive patterns proliferate racism and religious stereotypes. Finally, gendered online hate is another type of internet-spread hate. It is on the rise, unlike in the past, when online hate speech focused mainly on ethnicity and nationality. An analysis of over two million tweets in the ‘Italian Hate Map’ project indicates that women are the most insulted group, having received over 60% of hateful tweets. In addition, research shows that over 70% of female political bloggers in other countries have had negative experiences online (Castaño-Pulgarín et al., 2021). Kilvington (2021) claims that every 30 seconds, an abusive tweet targets a woman somewhere. Because social media dominates people’s lives, hate speech spreads easily and reaches a wide audience quickly.
Moreover, cyberhate is motivated by the feelings of privacy, security, and safety associated with virtual communication. The author of a message does not fully consider the intended audience or the effects of the message (Kilvington, 2021). The desire for privacy and anonymity encourages this random and reckless dissemination of hateful content on the Internet. In addition, younger generations are now exposed to a hostile online life and to unfiltered, unregulated content. In the long run, these young people risk adopting a culture of online hate and maintaining the same habits; without regulation, they will come to believe that what they do is excusable. This freedom makes it easy for them to amplify any negativity taking place on Facebook (Carlson & Frazer, 2018). Kilvington (2021, p. 267) argues that online hate speech leads to offline hate. This claim is evident in the case of the Rohingya Muslims in Myanmar, where hatred was stirred up through Facebook.
Consequences of Online Hate Speech
Many people counter with “freedom of speech,” arguing that what they say is simply an expression of their own thoughts with which no one may interfere. But hate speech is not the same as ordinary exploratory speech, and it should not be covered by principles like free speech, because it causes far more harm than ordinary speech. Speech that is offensive to a specific group of people is a harmful act from the moment it is uttered. The harms caused by hate speech include the disruption of harmonious community relations, psychological harm to members of the target group, and threats to law and order (Barendt, 2019). In modern societies, every citizen has the right to be assured that they need not fear violence because they are members of a particular racial, gender, or religious group. Through this reassurance, a good society commits to providing a sense of inclusiveness.
In such a society, everyone must be accorded the dignity they deserve in order to live with confidence. Hate speech harms the social status of members of the targeted groups, and hate speech laws uphold their dignity as equal members of society. Online hate speech affects adolescents and young adults more than the general population. Among the negative psychological effects experienced by victims are sleep disturbances, greater anxiety, and fear (Obermaier & Schmuck, 2022). These effects weigh heavily on adolescents who are still developing their personalities, since the content they consume on social media shapes their subsequent growth.
Online hate speech manifests in different forms, including insults, segregation, encouragement of exclusion, and calls for violence. It also involves spreading harmful disinformation about an individual or a group based on their gender, race, or political or religious beliefs (Garland et al., 2022). According to Kilvington (2021), the Internet has provided a platform for hate groups to spread hate. The Internet’s huge audience and easy access work to their advantage: hate groups can easily find new members and spread hate messages faster. Hate speech spread online eventually leads to hate crimes offline, which have increased in the current decade. By one survey, hate speech against the Chinese community rose by some 200% during the pandemic, and most of these incidents involved online racial hate. Thus, the increase in real-world hate crime cases originates on social media platforms.
Recommendations
Considering the rise in hate crimes that have translated into the offline world, it is important to take action against online hate. The following recommendations could help stop the spread of violent content online:
- Twitter, Facebook, and YouTube should delete hate speech as soon as it is discovered, make every effort to identify the poster, and issue a warning. If three warnings do not work, the account should be cancelled. Other users should also be encouraged to make good use of the report button.
- People who encounter hate online should report it to organizations that combat it. An important part of fighting online hate is tracking the source and spread of the hate. Publishers of statements judged to be very serious should receive a penalty.
- Groups targeted by hate speech should be supported. The public should adopt a culture of countering online hate speech to support the targeted groups. This can be achieved by spreading neutral messages, flooding comment sections with unrelated content to drown out the hate, or directly confronting those who spread hate messages.
Conclusion
Internet hate speech still does not receive enough attention, and the group conflicts behind it, its historical background, and its real-world implications are likewise neglected. It ranges from the simple expression of dislike for certain things or groups of people to real-life social violence triggered by online name-calling. As the Internet develops, the governance of online hate speech must be put on the agenda as soon as possible.
For now, the best way to deal with Internet hate speech is through the law: clarifying the legal responsibility borne by those who post such words, as well as the joint liability borne by the social platforms that host them, so as to create a better cyberspace for network users.
References
Barendt, E. (2019). What is the harm of hate speech? Ethical Theory and Moral Practice, 22(3), 539-553. https://doi.org/10.1007/s10677-019-10002-0
Carlson, B., & Frazer, R. (2018). Social media mob: Being Indigenous online.
Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608. https://doi.org/10.1016/j.avb.2021.101608
Garland, J., Ghazi-Zahedi, K., Young, J. G., Hébert-Dufresne, L., & Galesic, M. (2022). Impact and dynamics of hate and counter speech online. EPJ data science, 11(1), 3. https://link.springer.com/content/pdf/10.1140/epjds/s13688-021-00314-6.pdf
Kilvington, D. (2021). The virtual stages of hate: Using Goffman’s work to conceptualise the motivations for online hate. Media, Culture & Society, 43(2), 256-272. https://doi.org/10.1177/0163443720972318
Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. https://doi.org/10.1080/1369118X.2017.1293130
Obermaier, M., & Schmuck, D. (2022). Youths as targets: Factors of online hate speech victimization among adolescents and young adults. Journal of Computer-Mediated Communication, 27(4), zmac012. https://doi.org/10.1093/jcmc/zmac012
Weston, D. A. (2022). When does speech perform regulable action? A critique of speech act theory’s application to free speech regulation. International Journal of Language & Law (JLL), 11.
Wikimedia Foundation. (2023, April 11). Executive order 13769. Wikipedia. Retrieved April 14, 2023, from https://en.wikipedia.org/wiki/Executive_Order_13769
Williams, M. L., Burnap, P., Javed, A., Liu, H., & Ozalp, S. (2020). Hate in the machine: Anti-Black and anti-Muslim social media posts as predictors of offline racially and religiously aggravated crime. The British Journal of Criminology, 60(1), 93-117. https://doi.org/10.1093/bjc/azz049
YouTube. (2014, October 23). Rethink before you type | Trisha Prabhu | TEDxTeen. YouTube. Retrieved April 14, 2023, from https://www.youtube.com/watch?v=YkzwHuf6C2U&t=12s