Online hate speech: has the digital world become a breeding ground for hate?

Introduction

In today’s digital age, the Internet offers people a broad communication platform on which they can exchange views with others around the world and express their feelings and thoughts without leaving home, making the spread of information faster and more convenient. As a result, digital information platforms have become a crucial part of people’s lives. However, as the Internet has grown in popularity, the negative side of digital information has gradually emerged, and online hate speech is one of the most serious problems, with severe consequences for both society and individuals.

In this paper, we analyse how online hate speech negatively affects individuals and society. We use TikTok as a case study to examine how online hate speech occurs on this popular platform, to explore whether the existing regulatory measures are effective, and to suggest further improvements.

What is online hate speech?

European Roma Rights Centre (2023, May 17)

According to Flew (2021), online hate speech can be defined as speech that spreads hatred, discrimination, fear, and hostility through Internet platforms. It usually targets specific groups of people, attacking and discriminating against them on the basis of race, religion, gender, sexual orientation, disability, and so on. Online hate speech is not limited to text; it also includes images, videos, and other forms of content. Our own research found that online hate speech on TikTok exists in many forms: from derogatory remarks in video comments to harmful trends that spread stereotypes, it is clear that, as a short-form video platform, TikTok is not immune to this problem.

Causes of online hate speech

The anonymity of Internet speech (Lipinski, 2002).

The anonymity of the Internet makes it easier for people to spread hate speech, because they do not have to take responsibility for what they say and their image in their own social circles is not damaged by inappropriate comments. Survey research reported by mainstream media suggests that most people who post online hate speech, or who engage in cyberbullying, are not as vulgar and rude in real life as they appear to be on the Internet, which indicates that the cloak of online anonymity helps them to post these inappropriate comments.

Algorithms and filter bubbles (Whittaker, Looney, Reed & Votta, 2021). 

Social media platforms’ algorithms and filter bubbles also contribute to the spread of online hate speech. These mechanisms steer users towards information that matches their existing views, creating closed information circles in which hate speech can spread and deepen. The problem is particularly evident on TikTok, where the recommendation system learns a user’s preferences and then pushes large amounts of similar content. As a result, users may fail to notice inappropriate behaviour or hate speech even when they are being misinformed, and may even unknowingly join in cyberbullying on the basis of misleading content. At the same time, because information online is of very mixed quality and most users lack the skills to filter it, it is easy to become lost in this environment. A simplified sketch of this engagement feedback loop is given below.
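To illustrate the feedback loop described above, here is a minimal, hypothetical sketch of an engagement-driven recommender (this is not TikTok’s actual algorithm; the topics, weights, and update rule are illustrative assumptions). Each view makes similar content more likely to be recommended, so the simulated feed gradually narrows.

```python
# Hypothetical sketch of a filter bubble: an engagement-driven recommender
# in which every view reinforces similar content. Not TikTok's real system.
import random

TOPICS = ["sports", "music", "politics", "conspiracy"]

def recommend(weights):
    """Pick the next topic with probability proportional to past engagement."""
    total = sum(weights.values())
    return random.choices(list(weights), weights=[w / total for w in weights.values()])[0]

def simulate(steps=1000, boost=1.0):
    """Start with equal interest in every topic and feed engagement back in."""
    weights = {topic: 1.0 for topic in TOPICS}
    for _ in range(steps):
        topic = recommend(weights)
        # Feedback: watching one more video on this topic makes similar videos more likely.
        weights[topic] += boost
    return weights

if __name__ == "__main__":
    # After many iterations the simulated feed is usually heavily skewed towards
    # whichever topics happened to be reinforced early on.
    print(simulate())
```

The point of the sketch is not the specific numbers but the structure: content a user engages with is fed back into what they are shown next, which is exactly the closed loop that lets hate speech circulate unchallenged inside a bubble.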

Why is online hate speech harmful?

Selma Partners (2019, April 1)

First, online hate speech exacerbates social divisions and antagonism. It incites hostility and hatred among people, leading to social tension and disharmony. Secondly, individuals attacked by online hate speech may suffer psychological trauma, including anxiety, depression, and even suicidal tendencies. For example, on the Chinese version of TikTok there was a vicious incident in which the victim took her own life after being subjected to online abuse for dyeing her hair pink. According to Chen (2021), hate speech on TikTok has had a strongly negative impact on college students in China. Such speech not only harms the self-esteem and dignity of the victims, causing physical and psychological trauma, but may also damage their interpersonal relationships and social standing. At the same time, the existence of online hate speech has made social media a hotbed of hatred and fear: it pollutes the social media environment and undermines the normal functioning of social media platforms.

Does hate speech exist on TikTok?

Hate speech on TikTok takes many different forms. Some of the main ones include:

Discriminatory speech targeting specific groups: Some users post speech that is offensive and discriminatory on the basis of race, religion, gender, sexual orientation, physical characteristics, and so on (Flew, 2021).

Video content that incites hatred and hostility: Some users may post video content that incites hatred and hostility, which may include content that is derogatory, stigmatising, or suggestive of violence against a specific group of people.

Hate speech in comment sections: In video comment sections, users may post hateful comments, targeting and insulting video content or other users.

Content that spreads stereotypes: Some users post content that spreads negative stereotypes, which in turn fuels hatred and hostility towards specific groups of people.

Such hate speech can be psychologically and emotionally damaging to the groups being attacked and contributes to hatred and hostility in society. Although TikTok has taken steps to regulate and remove such content, including displaying users’ IP locations and other measures intended to discourage such incidents, the platform’s large user base and sheer volume of content mean that enforcement remains a significant challenge.

TikTok’s existing speech regulation measures

Britannica, T. Editors of Encyclopaedia (2024, April 8)

TikTok has implemented a number of speech regulation measures to address inappropriate content on the platform, including hate speech, according to the platform’s own management documents. These measures mainly include:

Community guidelines and regulations: TikTok has a clear set of community guidelines and regulations that set out a code of conduct for users posting content on the platform. These guidelines prohibit the posting of inappropriate content such as hate speech, violent content, and bullying. When a user crosses these lines, the platform takes action to suspend or ban the account.

Automated filtering and review systems: TikTok uses automated filtering and review systems to detect and filter inappropriate content, including hate speech. These systems use algorithms and artificial intelligence to help identify and remove offending content; when hate speech is detected during screening, the content is withheld from display and the poster is given a warning. A rough sketch of how this kind of screening can work in principle is given after this list.

User reporting and content review team: TikTok encourages users to report inappropriate content, including hate speech. Users can report hate speech and other objectionable content through an easily accessible complaint button and can upload relevant evidence. The platform also has a dedicated content review team that reviews user reports and handles violations, giving feedback to reporting users as well as warnings and penalties to reported users.

Partnerships and expert consultation: TikTok works with a variety of organisations and experts, including NGOs, academics, and government agencies, to develop and improve its speech regulation strategies and to provide training and guidance.
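As a rough illustration of the automated screening mentioned above (a hypothetical sketch under simple assumptions, not TikTok’s actual pipeline), the example below scores a post against a keyword blocklist and then decides whether to publish it, hide it and warn the poster, or route it to human review. Real systems rely on machine-learning classifiers rather than keyword lists, but the decision structure is similar.

```python
# Hypothetical moderation sketch: a keyword score decides whether a post is
# published, hidden and warned about, or queued for human review.
# Placeholder terms and thresholds; not TikTok's real rules or models.

BLOCKLIST = {"slur1", "slur2"}   # stand-ins for genuinely hateful terms

def score(text: str) -> float:
    """Fraction of tokens in the post that match the blocklist."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(token in BLOCKLIST for token in tokens) / len(tokens)

def moderate(text: str) -> str:
    s = score(text)
    if s >= 0.3:
        return "hide_and_warn"   # clearly violating: withhold the post and warn the poster
    if s > 0.0:
        return "human_review"    # borderline: queue it for the content review team
    return "publish"

print(moderate("a perfectly ordinary comment"))  # -> publish
```

The two-tier outcome mirrors the division of labour described above: automation handles the clear-cut cases at scale, while ambiguous posts and user reports fall to the human review team.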

Does over-regulation infringe on freedom of speech? 

Online Hate Prevention Institute (2016, March 11)

As governments and platforms continue to tighten their regulation of hate speech, a new question arises: does over-regulation infringe on citizens’ right to freedom of expression? Freedom of speech is a fundamental human right that guarantees individuals the freedom to express their views and opinions. However, when regulatory measures are excessive or unreasonable, they may restrict people’s freedom of expression, so over-regulation can indeed lead to infringements of that right. This requires that, when setting standards for the regulation of expression, a balance be struck between freedom of expression and the harm caused by harmful expression, so that harmful expression is prevented from spreading while legitimate expression is not unfairly restricted and the right to freedom of expression is not excessively interfered with. In addition, reasonable complaint mechanisms should be provided to ensure that freedom of expression is protected (Inobemhe, 2021).

Improvements

Naganna Chetty (2018, May 3)

By studying Facebook: Regulating Hate Speech in the Asia Pacific (Sinpeng, Martin, Gelber & Shields, 2021) and Platformed Racism: The Mediation and Circulation of an Australian Race-Based Controversy (Matamoros-Fernández, 2017), two examples of research on hate speech regulation, we can analyse the strengths and weaknesses of existing regulatory measures and propose new and constructive recommendations for improvement.

In the face of the threat of online hate speech, individuals, social media platforms, and governments all need to take proactive countermeasures. First, governments should strengthen the regulation and enforcement of online hate speech and combat malicious speech through legislation and legal means. For example, Australia’s Online Safety Act 2021 strengthens the powers of the eSafety Commissioner and introduces new rules against issues such as cyberbullying, image-based abuse, and illegal or restricted online content (Humphry, Boichak & Hutchinson, 2023). At the same time, relevant authorities and civil society organisations can exercise oversight and regulatory responsibilities: according to Sinpeng, Martin, Gelber & Shields (2021), Facebook has been the subject of a complaint by the Australian Muslim Advocacy Network (AMAN) to the Australian Human Rights Commission over racially discriminatory hate speech.

Secondly, the responsibility of social media platforms should be strengthened. Platforms should improve content auditing and management, deleting online hate speech and stopping its spread in a timely manner. In addition, Raut & Spezzano (2023) suggest that users who post hate speech tend to share certain characteristics, so platforms can strengthen hate speech detection by taking user characteristics into account; a minimal sketch of this idea follows below. Finally, public awareness should be raised. Individual members of the public should be more alert to online hate speech, more careful not to believe or spread inaccurate information and malicious speech, and should not use the Internet to post hate speech or engage in cyberbullying.
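The sketch below illustrates the idea attributed to Raut & Spezzano (2023) above in the simplest possible form: blending a text-based hate score with a user-history signal when deciding whether a post needs review. The specific features, weights, and thresholds are illustrative assumptions, not the authors’ actual model.

```python
# Hypothetical sketch: combine a text score with user characteristics to
# prioritise posts for review. Features, weights, and thresholds are
# illustrative assumptions, not the model from Raut & Spezzano (2023).
from dataclasses import dataclass

@dataclass
class UserProfile:
    prior_posts: int     # how many posts the user has made before
    prior_flagged: int   # how many of those were flagged as hateful

def user_risk(profile: UserProfile) -> float:
    """Share of the user's past posts that were flagged (0 if no history)."""
    if profile.prior_posts == 0:
        return 0.0
    return profile.prior_flagged / profile.prior_posts

def combined_score(text_score: float, profile: UserProfile,
                   w_text: float = 0.7, w_user: float = 0.3) -> float:
    """Weighted blend of a text-based hate score and a user-history score."""
    return w_text * text_score + w_user * user_risk(profile)

# Example: a borderline post (text score 0.4) from a user whose past posts were
# often flagged is pushed over the review threshold by the user-level signal.
profile = UserProfile(prior_posts=50, prior_flagged=20)
print(combined_score(text_score=0.4, profile=profile) >= 0.4)  # True -> send to review
```

With the same text but a clean posting history, the combined score would be only 0.28 and the post would not be escalated, which is the point of adding the user-level signal.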

Conclusion

Online hate speech is a serious challenge facing today’s society: it not only undermines social harmony and stability, but also causes serious psychological and emotional harm to individuals. On social media platforms such as TikTok, we regularly see hate speech directed at specific groups on the basis of factors such as race, religion, gender, and sexual orientation. These comments not only make the individuals being attacked feel humiliated and hurt, but also further divide and polarise society. More seriously, some people affected by online hate speech may develop mental health problems such as anxiety and depression, and may even be at risk of suicide. Therefore, all of us should actively take part in the fight against online hate speech. Whether as users of social media platforms or as ordinary netizens, we can help maintain a more harmonious, fair, and friendly online environment by reporting inappropriate comments and refusing to take part in the spread of malicious information. Only by working together can we create a digital space where people feel safe and comfortable, and where everyone can freely express their views and opinions without fear of being victimised by hatred and discrimination.

Reference list

Chen, T. (2021). The Influence of Hate Speech on TikTok on Chinese College Students (master’s thesis, University of South Florida).

Flew, T. (2021). Hate Speech and Online Abuse. In Regulating Platforms (pp. 91–96). Polity Press.

Humphry, J., Boichak, O., & Hutchinson, J. (2023). Emerging Online Safety Issues: Co-creating social media with young people-Research Report.

Inobemhe, K. (2021). Social Media Regulation in a Democratic Nigeria: Challenges and Implication. Media & Communication Currents, 5(1), 71–88.

Lipinski, T. A. (2002). To speak or not to speak: Developing legal standards for anonymous speech on the Internet. Informing Science, 5, 95.

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook, and YouTube. Information, Communication & Society, 20(6), 930–946.

Raut, R., & Spezzano, F. (2023). Enhancing hate speech detection with user characteristics. International Journal of Data Science and Analytics, 1–11.

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award: Regulating Hate Speech in the Asia Pacific.

Whittaker, J., Looney, S., Reed, A., & Votta, F. (2021). Recommender systems and the amplification of extremist content.

Images

Britannica, T. Editors of Encyclopaedia (2024, April 8). TikTok. Encyclopedia Britannica. [Photograph]. Retrieved from https://www.britannica.com/topic/TikTok

European Roma Rights Centre (2023, May 17). New activist research exposes lack of action on anti-Roma hate speech online. [Photograph]. Retrieved from https://www.errc.org/press-releases/new-activist-research-exposes-lack-of-action-on-anti-roma-hate-speech-online

Naganna Chetty (2018, May 3). Aggression and Violent Behavior. [Photograph]. Retrieved from https://www.sciencedirect.com/science/article/pii/S1359178917301064

Online Hate Prevention Institute (2016, March 11). The lines of Laws and Norms. [Photograph]. Retrieved from https://ohpi.org.au/the-lines-of-laws-and-norms/

Selma Partners (2019, April 1). April SELMA focus: The consequences of hate speech. [Photograph]. Retrieved from https://hackinghate.eu/news/april-selma-focus-the-consequences-of-hate-speech/
