What is hate speech?
Hate speech is public speech that expresses hatred or encourages violence against individuals or groups based on attributes such as race, religion, gender, or sexual orientation (Sinpeng, 2021). According to Pew Research Center data, 41% of adults have personally experienced online harassment, 25% have experienced more severe forms of it, and 20% have been harassed because of political views they expressed online (Atske, 2021). Online hate speech not only harms individuals but can also lead to social division and violence, so reducing hate speech and the harm it causes is critical.
Do you understand the harm of hate speech?
According to Axel Bruns, hate speech is becoming increasingly prevalent online, with attacks on women and on racial, religious, and sexual minorities more and more likely to occur in public discourse. This trend may be related to the anonymity and widespread use of the Internet: it is often easier for people to say things online that they would not dare to say in real life, which creates a breeding ground for hate speech (Rainie, 2022). According to BBC News (2021), misogyny and other sex-based hostility should also count as hate speech.
The system for regulating hate speech on the Internet is flawed, which makes it more difficult for victims to defend their rights.
The case of the pink-haired girl
Recently, a piece of news triggered public reflection on online violence and on social media platforms' efforts to regulate hate speech (MoneyControl, 2023). Zheng Linghua, a young Chinese woman, committed suicide in January 2023 after battling depression for six months. It began when she posted pictures of herself, with pink hair, celebrating her admission to graduate school with her bedridden grandfather on the popular Chinese social media site Xiaohongshu. The comments section was flooded with abuse, with one comment saying, "Why do grad students dye their hair like barmaids?" and another, "Like a hostess in a nightclub." Some users harassed her through private messages, spread the photos of her and her grandfather, and circulated rumors that a young pink-haired woman had married an older man. At first, Linghua chose to confront the problem head-on, suing the cyber trolls and hate mongers, but the abuse took a serious toll over time, and she eventually took her own life.
When Linghua first called the police, the police responded, "No name was given and it cannot be defined" (MoneyControl, 2023).
This incident made people think about how easily hate speech can harm a person, and how difficult it is to protect victims' rights: when ordinary people try to defend their rights, it is hard to identify the specific perpetrators. At present, a victim can only sue the platform company first and ask it to disclose the infringers' identity information. Linghua faced enormous difficulty locating the individual offenders, marketing accounts, and multiple Internet platforms involved.
Rainie (2017) argues that tech companies, such as social media platforms, have little incentive to control incivility, because hate, anxiety, and anger generate conversation; high engagement keeps users on the platform, and that engagement makes advertising lucrative. Making and enforcing policies is therefore an obligation that social media platforms must fulfill.
In Linghua's case, she merely posted to share her life, and the result was devastating. According to subsequent investigation, her account still exists, but the hateful comments on her earlier posts have been deleted and replaced by thousands of comments expressing regret over her death and anger at the trolls. This makes us wonder: did the pressure of public opinion make the trolls delete their comments, did they delete them out of guilt, or did the platform delete them once it learned what had happened? Either way, the platform has received enormous traffic and attention because of the incident. Is this the result of the platform's poor supervision?
What can platforms do to avoid these problems? Take Xiaohongshu and Instagram as examples
XiaoHongshu's community code prohibits acts that incite ethnic hatred or discrimination, undermine ethnic unity, insult or slander others, or infringe upon others' legitimate rights and interests.
XiaoHongshu's community covenant advocates empathy and friendly communication, and encourages users to tell XiaoHongshu what they do not want to see through the report and dislike buttons, in order to avoid amplifying potential harm.
According to XiaoHongshu's violation handling guidelines (2021), violations are identified either through manual investigation or by XiaoHongshu's algorithmic screening system. Violating content may be handled as follows: a. deducting cheating data; b. limiting display scope; c. forbidding display; d. pursuing legal responsibility in accordance with the law.
Violating accounts may be handled as follows: a. limiting account functions (for example, cheating accounts cannot apply to become creators); b. limiting display scope; c. muting the account; d. blocking the account; e. pursuing legal liability.
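This tiered enforcement can be sketched as a mapping from violation severity to escalating platform actions. This is a hypothetical illustration: the severity levels and action names below are assumptions for the sketch, not XiaoHongshu's actual implementation.

```python
from enum import Enum

class Severity(Enum):
    LOW = 1       # e.g., cheating/spam data
    MEDIUM = 2    # e.g., content that should have limited reach
    HIGH = 3      # e.g., insults, harassment
    CRITICAL = 4  # e.g., content with legal implications

# Hypothetical enforcement ladder modeled on the tiered actions above:
# lighter violations get lighter remedies, severe ones escalate.
ACTIONS = {
    Severity.LOW: ["deduct_cheating_data"],
    Severity.MEDIUM: ["limit_display_scope"],
    Severity.HIGH: ["forbid_display", "mute_account"],
    Severity.CRITICAL: ["forbid_display", "block_account", "refer_to_legal"],
}

def actions_for(severity: Severity) -> list[str]:
    """Return the set of enforcement actions for a violation severity."""
    return ACTIONS[severity]
```

The key design idea is that remedies are cumulative with severity, so a platform can respond proportionately rather than treating every violation as a ban-or-nothing decision.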
Instagram and Facebook have similar Community Guidelines.
Instagram's guidelines on cyberviolence state: "If you see content that you believe may not comply with the community rules, please notify us through the built-in reporting option. Our global team will review these reports and remove content that violates the community code as soon as possible" (Instagram, 2023).
Even with these rules in place, relying on users' self-restraint and awareness asks too much of them, and once a situation gets out of control it causes real harm. Technology is therefore expected to step in to prevent hate speech from causing harm.
The use of AI in combating hate speech
Artificial intelligence (AI) is expected to play a positive role in combating hate speech. AI can monitor and analyze vast amounts of social media data to better understand the types of hate speech, where it comes from, and how it spreads. By analyzing these data, AI can identify and predict likely hate speech, automatically review content posted on social media, and strengthen prevention by refusing to publish content that resembles hate speech or prompting the author to revise it (Davidson et al., 2017).
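The core of such automated review is a text classifier trained on labeled examples. Davidson et al. (2017) used logistic regression over TF-IDF features on a large annotated corpus; the sketch below substitutes a much simpler stdlib-only Naive Bayes model on a toy data set to show the general idea. The training sentences and labels are invented for illustration.

```python
import math
from collections import Counter

# Toy training data: (text, label). Real systems train on large
# annotated corpora such as the one used by Davidson et al. (2017).
TRAIN = [
    ("i hate these people they should disappear", "hate"),
    ("those people are vermin and deserve violence", "hate"),
    ("go back to where you came from", "hate"),
    ("what a lovely day for a walk", "ok"),
    ("congratulations on your graduation", "ok"),
    ("i love this song so much", "ok"),
]

def train_naive_bayes(samples):
    """Count words per class for a multinomial Naive Bayes model."""
    word_counts = {"hate": Counter(), "ok": Counter()}
    class_counts = Counter()
    for text, label in samples:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Pick the class with the highest log-probability (Laplace smoothing)."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, -math.inf
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, cc = train_naive_bayes(TRAIN)
print(classify("those people deserve to disappear", wc, cc))  # -> hate
```

A production system would, as the text notes, run such a classifier on new posts and either block publication or prompt the author to revise when the "hate" score is high.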
It is difficult to define the language of cyber violence; at both the legal and practical levels, the definition is relatively vague. In Zheng Linghua's case, many remarks were hard to classify, for example, "raggedy girl," "garbage," "disgusting," "Internet celebrity hype," and "adding drama to yourself." Some comments used abbreviations or memes, which do not clearly qualify as cyberbullying. Whether a comment constitutes insult or libel in a criminal case must be decided by the court at its discretion.
Advances in Natural Language Processing: As natural language processing technology continues to evolve, AI will be able to more accurately recognize and understand language to better distinguish hate speech from ordinary speech. In addition, AI can help detect and recognize hidden meanings and symbols, as well as automatically translate hate speech into different languages.
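One concrete piece of detecting "hidden meanings and symbols" is normalizing the character substitutions and abbreviations users employ to evade keyword filters. The substitution and abbreviation tables below are small invented examples, not a real platform's rules.

```python
# Hypothetical normalizer: undoes common obfuscations such as
# "h4te" for "hate" or "h8" for "hate" before classification.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s",
})
ABBREVIATIONS = {"h8": "hate", "u": "you", "ur": "your"}

def normalize(text: str) -> str:
    """Lowercase, undo leetspeak substitutions, expand abbreviations."""
    words = text.lower().translate(SUBSTITUTIONS).split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

print(normalize("I h8 u"))  # -> i hate you
```

After normalization, obfuscated slurs collapse onto their plain forms, so the downstream classifier only needs to recognize the canonical spelling.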
It’s important to note that AI still has some challenges and limitations, such as miscalculation and invasion of privacy. Therefore, when applying AI technology to combat hate speech, it is necessary to balance security and freedom and take appropriate measures to protect people’s privacy and freedom of speech.
Therefore, to address this difficulty, social media can adopt a model combining human judgment and AI to better prevent the harms of cyber violence. Platforms can exercise their own initiative to establish a review mechanism for online speech in which everyone acts as a juror, judging whether a comment meets the definition of cyber violence. If most reviewers agree that a comment constitutes cyber violence, the platform can directly ban the account, remove the content, or restrict its visibility.
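The hybrid pipeline described above, AI screening first, then a majority vote among human "jurors" for flagged items, can be sketched as follows. The function names, vote threshold, and outcome labels are illustrative assumptions.

```python
def jury_verdict(votes: list[bool], threshold: float = 0.5) -> bool:
    """True (treat as cyber violence) if the share of 'yes' votes
    exceeds the threshold, i.e. a simple majority by default."""
    return sum(votes) / len(votes) > threshold

def moderate(comment: str, ai_flagged: bool, votes: list[bool]) -> str:
    # The AI screens everything; only flagged comments reach the human
    # jurors, which keeps the human review workload manageable.
    if not ai_flagged:
        return "publish"
    if jury_verdict(votes):
        return "remove_and_sanction_account"
    return "publish_with_warning"
```

For example, a comment the AI flags and two of three jurors condemn would be removed, while an unflagged comment never consumes juror time at all.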
The challenge for social platforms is how to balance free speech with hate speech.
But freedom of speech is not in conflict with the prevention of hate speech. According to United Nations Secretary-General António Guterres (2019), "Addressing hate speech does not mean limiting or prohibiting freedom of expression. That means preventing hate speech from escalating into something more dangerous, particularly incitement to discrimination, hostility, and violence, which are prohibited under international law."
So what we need to do is twofold. On the one hand, social platforms should protect the right to free speech and allow users to express their views freely. On the other hand, they must prevent the spread of hate speech and violent content and protect users from harm. Many students believe that society is capable of both protecting First Amendment rights and curbing hate speech; from this viewpoint, protecting hate speech seems unnecessary and harmful (Emily, 2017).
To balance free speech and hate speech, social platforms can first establish clear community guidelines that explicitly prohibit the posting of hate speech and violent content. These guidelines should be fair and transparent so that users know when their comments may violate the platform's rules.
Secondly, use more mature technology: Social platforms can use artificial intelligence and machine learning techniques to automatically detect and identify possible hate speech and violent content.
Hire professional moderators: Social platforms can hire professional moderators to review content posted by users. Moderators need to receive professional training to ensure that they can correctly identify hate speech and violent content and take appropriate action in accordance with the platform’s regulations.
Giving users more control: Social platforms can give users more control by allowing them to filter and block content that might make them uncomfortable. This approach protects users from unnecessary harm while also allowing them to maintain the free expression of their views. Users will feel more comfortable sharing equally on social media only when the platform ensures that it can play its full role and protect users from harm.
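These per-user controls, muting words and blocking authors, amount to a simple visibility filter applied before content reaches the feed. The class and method names below are a hypothetical sketch, not any platform's real API.

```python
class UserFeed:
    """Hypothetical per-user content controls: muted words and blocked authors."""

    def __init__(self):
        self.muted_words: set[str] = set()
        self.blocked_users: set[str] = set()

    def mute(self, word: str) -> None:
        self.muted_words.add(word.lower())

    def block(self, user: str) -> None:
        self.blocked_users.add(user)

    def visible(self, author: str, text: str) -> bool:
        """A post is shown only if its author isn't blocked and it
        contains no muted word (case-insensitive substring match)."""
        if author in self.blocked_users:
            return False
        return not any(word in text.lower() for word in self.muted_words)
```

Because the filter runs on the viewer's side, it narrows what one user sees without deleting the content for anyone else, which is how such controls can coexist with free expression.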
Online hate speech not only causes harm to individuals but can also lead to social division and violence. Solving this problem requires a series of measures, such as a legal system, educational publicity, and social supervision, to reduce and curb the spread and influence of hate speech on the Internet.
It is suggested that the likelihood of cyber violence be reduced from three directions: improving netizens' rational literacy and personal self-discipline, having mainstream media take the initiative to assume responsibility, and improving and implementing the relevant moral norms and legislation governing network media.
At the same time, individuals should be sensible when using the Internet and social media, abide by public order and good customs as well as relevant laws and regulations, and jointly build a harmonious, inclusive, and peaceful cyberspace.
Atske, S. (2021). Characterizing people's most recent online harassment experience. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2021/01/08/characterizing-peoples-most-recent-online-harassment-experience/
BBC News. (2021). Sex-based hostility should be hate speech, recommends report. BBC. https://www.bbc.com/news/uk-politics-59554199
Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated hate speech detection and the problem of offensive language. Proceedings of the International AAAI Conference on Web and Social Media, 11(1), 512–515. https://doi.org/10.1609/icwsm.v11i1.14955
Emily, E. (2017). Is supporting racists' free speech rights the same as being a racist? Cato Institute. https://www.cato.org/blog/supporting-racists-free-speech-rights-same-being-racist
Instagram. (2023). Instagram Help Center. https://help.instagram.com/581066165581870
MacAvaney, S., Yao, H.-R., Yang, E., Russell, K., Goharian, N., & Frieder, O. (2019). Hate speech detection: Challenges and solutions. PLOS ONE. https://doi.org/10.1371/journal.pone.0221152
Nockleby, J. T. (2000). Hate speech. In Encyclopedia of the American Constitution (Vol. 3, pp. 1277–1279).
Rainie, L. (2022, September 15). The future of free speech, trolls, anonymity and fake news online. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
United Nations. (2019). Hate speech versus freedom of speech. https://www.un.org/en/hate-speech/understanding-hate-speech/hate-speech-versus-freedom-of-speech