Under the guise of free expression on the Internet: a perspective on hate speech

Figure 1.1 “Hate speech versus freedom of speech” by United Nations

With the development of Internet technology and the increasing speed of information transmission, social media has quietly permeated our daily lives. One of its most important features is that people can communicate across physical distances. But while social media enables freedom of expression, the amount of hate speech is also increasing and endangering people’s daily lives. Hate speech undermines people’s right to freedom of expression and carries serious negative social consequences. Because differences in history and culture between countries and regions shape how people perceive things, it is important that social media platforms be regulated: only by promoting clear policies and rules can the line between freedom of expression and hate speech be drawn more effectively. This article analyzes what hate speech is, the consequences of hate speech, and platform regulation.

What is hate speech?

Figure 1.2 “Interview with Special Advisor on Genocide Adama Dieng on Hate Speech” by United Nations

Social media platforms build a new cultural space for transmitting information: they not only facilitate communication between people but also enable the growth of diverse information in cyberspace. Yet this diversity can be exploited by malicious individuals or groups to spread content that is illegal and detrimental to social stability. Among such content, hate speech is the most dangerous and attracts the most concern around the world. Hate speech can be understood as a form of expression that harms people in ways comparable to more obvious physical harm (Sinpeng et al., 2021). It is typically targeted at marginalized groups and discriminates against them. Hate speech should not be understood only as speech that offends someone or hurts their feelings, but as speech whose harm deepens and accumulates over time (Sinpeng et al., 2021). This also means that what counts as hate speech is bound by time and place, and changes constantly with the environment and the issues a society faces; today, for example, its targets include women, Black people, Asian communities, and religious groups. Hate speech may include discriminatory statements, verbal incitement against target groups, or violence against those groups (Sinpeng et al., 2021). It should therefore be seen as targeting specific physical characteristics or groups with specific cultural characteristics. Forms of persecutory speech include public expressions of hatred and discrimination, and direct or indirect participation in collective bullying and violence. With the development of new media platforms, hate speech now uses social media as its medium and exploits technologies such as images and videos to spread and to incite people’s emotions.

How do social platforms amplify hate speech?

Figure 1.3 “Hate speech has soared online since George Floyd’s death” by Lazaro Gamio / Axios

  • The impact of the functionality of social media platforms on hate speech

Social media has given people more opportunities to express themselves, while promising everyone equal access to information and expression. Yet these apparently equal rights have actually triggered deeper inequalities. Flew (2021) argues that digital platform companies have been slow to act on hate speech and that platforms such as Google and YouTube have not taken on the regulatory responsibility that digital platforms should bear. The policy choices of digital platforms and their ideological influence on the Internet therefore carry an inescapable responsibility for amplifying hate speech. The tolerance implied by freedom of expression allows more people to use hate speech to hurt marginalized groups. The Internet not only offers opportunities for expressing racial identity but also reproduces power and hierarchy: from a discourse perspective, the features, policies, algorithms, and corporate decisions of platforms intensify the dynamics of racism (Matamoros-Fernández, 2017).

Expressions of emotion and hatred in online communication are contagious and reinforce one another, accelerating their spread. Hate speech can be expressed not only as text or talk on the Internet but also through images and videos; such diverse forms of expression are more likely to transmit and exaggerate hateful attitudes and to be widely disseminated.

In addition, the anonymity of the Internet allows users to develop and voice their opinions freely (MacAvaney et al., 2019). While anonymity on platforms has to some extent protected users from discrimination, it has also loosened the constraints on expression (Matamoros-Fernández, 2017). Because of the decentralized nature of the Internet, hate speech can be published anonymously without any scrutiny (MacAvaney et al., 2019). This anonymity allows people to participate in hate speech without real-life pressure or fear of being discovered.

  • The impact of new media platforms’ algorithms on hate speech

Algorithms, as a core part of platform operations, largely determine the visibility of trending topics (Matamoros-Fernández, 2017). People can search hashtags according to hot trends, which also means that many users can exploit algorithmic amplification to create and spread hate speech that harasses victims. When extremists spread videos or speech through platforms, moderation cannot keep pace with the speed of dissemination, making effective regulation impossible. Moreover, once violent hate speech reaches the trending topics through a hashtag, it spreads to every corner of the world within a short time; such dynamics can easily incite group unrest. A toy illustration of this amplification mechanism appears below.
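
To see why trending systems are easy to exploit, consider a deliberately simplified sketch of engagement-based ranking. This is an illustration only, not any platform’s actual algorithm: the hashtags, numbers, and the `trending` function are all hypothetical.

```python
# An illustrative (not any real platform's) trending score: ranking by
# raw engagement velocity favours whatever provokes the strongest
# reactions, which is how hateful hashtags can surface quickly.
from collections import Counter

def trending(posts, window_hours=1.0):
    """posts: iterable of (hashtag, engagements) pairs from one time window."""
    score = Counter()
    for hashtag, engagements in posts:
        # Outrage-driven interactions count the same as any other
        # interaction, so a coordinated campaign inflates its own tag.
        score[hashtag] += engagements / window_hours
    return score.most_common(10)

# Hypothetical window of activity: the harassment tag outranks the rest.
print(trending([("#news", 120), ("#hatecampaign", 400), ("#sport", 90)]))
```

Because the score treats all engagement alike, a coordinated campaign can push a hateful tag into the top list faster than moderators can respond.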

Detecting hate speech is a challenging task, because people differ on how to define it. Given these differing definitions, collecting and labelling hate speech data on the Internet becomes difficult (MacAvaney et al., 2019). For example, platforms have made rules prohibiting the posting of hateful content, yet they allow parody and humor related to hate speech (Matamoros-Fernández, 2017). Facebook and YouTube maintain that by allowing humorous or parodic content connected to hate speech, they are protecting users’ right to express unpopular views. But some users exploit satire to disguise discrimination against marginalized groups or outright racism, so protecting “humorous” expression in this way increases the discrimination and harm done to victims. The sketch below illustrates why automated detection inherits this definitional problem.
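
Here is a minimal sketch of how automated detection commonly works, assuming a small hand-labelled dataset and the scikit-learn library; the example texts and labels are invented for illustration, and production systems (such as those surveyed by MacAvaney et al., 2019) are vastly larger. The key point is that the model can only learn its annotators’ particular definition of “hate.”

```python
# A minimal hate speech detection sketch: TF-IDF features plus logistic
# regression, a common simple baseline for text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: the labels encode one group of annotators'
# definition of "hate", which is exactly the ambiguity discussed above.
texts = [
    "go back to where you came from",   # labelled hateful
    "I disagree with this policy",      # labelled benign
    "those people are subhuman",        # labelled hateful
    "great game last night",            # labelled benign
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Predict on unseen text; output is 1 (hateful) or 0 (benign).
print(model.predict(["those people ruined the game"]))
```

A lexical model like this is easily fooled by satire, parody, or coded language, because the surface words need not be hateful at all, which is precisely the loophole described above.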

The serious consequences of hate speech

Figure 1.4 “Feeling dislike or even hatred towards an individual or social group is not and should not be a crime” by Vincent Yu/AP

The serious consequences of hate speech on the Internet are, respectively, the impossibility of fully removing the information, the damage to the online environment, and the impact on real-world activity. The connection between real society and cyberspace allows online emotion to evolve into real-life acceptance and approval of hate speech, even triggering offline violence and jeopardizing public safety. For example, in March 2019, the perpetrator of a terrorist attack in New Zealand live-streamed his massacre at a mosque on Facebook for more than 10 minutes, and the stream subsequently spread to other media platforms. The killer’s social media accounts contained a large amount of racist content concerning Muslims, terrorist attacks, and white supremacy (Every-Palmer et al., 2021). His hate speech on social media culminated in real-world violence, and the live-stream format inflicted extreme emotional trauma on the communities his racism targeted. Facebook was too slow to act in this case and did nothing to stop the live stream, prompting a backlash against Facebook in New Zealand and demands that hate speech on social media be regulated and reviewed.

The low threshold of social media increases people’s freedom of expression on the Internet, but that freedom can be perverted: the footage of the terrorist attack that spread from Facebook to various platforms will be preserved forever. Each of these transmissions is global and instantaneous, and the content carries hate speech with long-lasting, damaging effects on the victim community. Second, such negative news is amplified by algorithmic trending topics and recommendations, contaminating the online environment and seriously threatening security both online and offline. The matter generates secondary discussions and even secondary attacks in the period that follows, causing ongoing harm to marginalized groups. In addition, the killer’s hate speech before the attack worked to exclude and divide social groups, inflicting explosive damage on the social order. Finally, Facebook’s failure to act while the killer live-streamed the entire shooting, and the other platforms’ allowing the video to circulate, exemplify the lack of clarity in digital media’s regulation of hate speech. Moderation is a closed-loop system: while platforms use algorithms to actively remove hateful messages, users who wish to deliver hate speech quickly find ways to circumvent these measures, for example by embedding the content in images rather than plain text (MacAvaney et al., 2019). Digital platforms find it difficult to distinguish legitimate expression from discriminatory and hateful ideas, and the result is a serious crisis of trust between platforms and the audiences they serve.

About Facebook’s internal regulation

As violent acts around the world increase, so does public concern about these matters on the Internet, and public opinion increasingly favours giving platforms more responsibility for regulation and governance. Facebook’s content moderation is expanding to include not only machine review but also human review. The reviewers, however, are drawn from the volunteer labor of platform users and from outsourced review services (Sinpeng et al., 2021). Because Facebook operates globally, reviewers must understand local information and the policies and cultures of different countries so that regulation can be constantly adjusted.

In addition, Facebook maintains an essentially equal attitude towards all posted content, so a racist message is treated in much the same way as a spam message (Siapera & Viejo-Otero, 2021). This leaves no serious punishment for those who spread hate speech: when their accounts are banned, they can simply re-register by other means.

Some solutions for identifying hate speech

Figure 1.5 “Facebook extending ban on hate speech to white nationalism, white separatism” by CBC

As mentioned above, users from different cultures hold different views on how accurately internal review identifies hate speech. Platforms therefore need to do more to ensure that hate speech policies and review decisions are applied consistently, and to give both those who report and those who are reported more detailed feedback on what the decision covered (Sinpeng et al., 2021). IT companies need to update their algorithms for identifying hate speech, train their employees, and enter into cooperative relationships with other platforms.

For users who spread hate speech through images, optical character recognition (OCR) can recover the embedded text, although such statements still prove difficult to detect reliably (MacAvaney et al., 2019). Besides raising the low threshold for publishing content, platforms should also temper anonymity with an appropriate increase in transparency, to ensure that users are not harmed by such speech. Although this practice raises concerns about user privacy, it also means that those posting hate speech must fear punishment. Building laws, technical regulation, and education are all essential to solving the problem of hate speech. A sketch of the OCR approach follows.
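
Below is a minimal sketch of the OCR idea, assuming the Tesseract engine is installed and using the pytesseract and Pillow libraries; `flag_image` and its `classify` argument are hypothetical names, with `classify` standing in for a text model such as the one sketched earlier.

```python
# A sketch of one counter-measure discussed above: run OCR over uploaded
# images so that text hidden in memes can be fed to a text classifier.
from PIL import Image
import pytesseract

def extract_text(image_path: str) -> str:
    """Pull any machine-readable text out of an uploaded image."""
    return pytesseract.image_to_string(Image.open(image_path))

def flag_image(image_path: str, classify) -> bool:
    """Return True if the OCR'd text is classified as hateful (label 1)."""
    text = extract_text(image_path)
    # OCR fails on stylised fonts, low contrast, or text drawn into the
    # artwork, which is why image-borne hate speech remains hard to catch.
    return bool(text.strip()) and classify(text) == 1
```

Even with this pipeline in place, stylised or distorted text defeats OCR, so image-based evasion remains an open problem rather than a solved one.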

Conclusion

Although social media has brought users more freedom of expression, their right to equality on the Internet has also been violated to some extent. The example of the terrorist attack demonstrates that hate speech on the Internet has a powerful impact on real society and can fuel the spread of even more serious hate speech. Overall, as the Internet becomes ever more intertwined with society, regulating platforms and finding solutions to hate speech become particularly important.

References

MacAvaney, S., Yao, H.-R., Yang, E., Russell, K., Goharian, N., & Frieder, O. (2019). Hate speech detection: Challenges and solutions. PloS One, 14(8), e0221152. https://doi.org/10.1371/journal.pone.0221152

Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.

Siapera, E., & Viejo-Otero, P. (2021). Governing Hate: Facebook and Digital Racism. Television & New Media, 22(2), 112–130. https://doi.org/10.1177/1527476420982232

Every-Palmer, S., Cunningham, R., Jenkins, M., & Bell, E. (2021). The Christchurch mosque shooting, the media, and subsequent gun control reform in New Zealand: A descriptive analysis. Psychiatry, Psychology, and Law, 28(2), 274–285. https://doi.org/10.1080/13218719.2020.1770635

Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 91–96.
