Free speech is not hate speech: Racial discrimination on social media platforms

With the rise of social media platforms, more and more people post, share, and comment online to express their attitudes and opinions. Barlow (1996) declared in “A Declaration of the Independence of Cyberspace”, “We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.” Social media has given people an online platform where they can speak freely. However, the spread of harmful speech has broken this hoped-for vision.

Hate speech and related phenomena, such as online harassment and cyberbullying, cause real physical and psychological harm to the people and groups they target. Moreover, hate speech on social media platforms is directed at many different groups.

According to research from the eSafety Commissioner (2019), “Religion, political views, race and gender were the most common reasons cited in both Australia and New Zealand for experiencing hate speech.” This blog post focuses on the racist hate speech identified in that study.

Understanding hate speech

According to Flew (2021), “Even when we place a very high value upon free speech as an indispensable manifestation of freedom of thought and an instrument of human development, political life, and intellectual advancement, hate speech does appear to be objectionable because it promotes mistrust and hostility in society and negates the human dignity of the targeted groups.”

Free speech, in other words, is the foundation of debate and aims to encourage the unrestricted exchange of ideas and information. Hate speech, by contrast, is meant to hurt or exclude specific people or groups and frequently has harmful direct or indirect effects.

According to Parekh (2012), “hate speech expresses, encourages, stirs up, or incites hatred against a group of individuals distinguished by a particular feature or set of features such as race, ethnicity, gender, religion, nationality, and sexual orientation.”

Countries with large immigrant populations, where many cultures, races, genders, and religious beliefs coexist, are often more prone to discrimination and hatred.

“In 2020, 21.4% of all Oceania residents – including Australia, New Zealand and various Pacific island nations and territories – were international migrants” (Natarajan et al., 2022). Compared with other countries, then, Australia, with its cosmopolitan population, is more likely to see racially targeted hate speech. Like countries, social media platforms are made up of diverse identity groups, and this inevitably produces different opinions and forms of expression.

Case study of racist hate speech

A community is usually made up of diverse groups (in terms of gender, race, religion, and so on), and the same is true of the users of social media platforms. A university, especially one with a long history, welcomes people from all over the world. Take the University of Sydney as an example: its official website tells the public that it welcomes people of different identities to join the university.

However, despite the positive attitude the university projects, not all students recognise and respect the existence of diverse groups. According to Saha et al. (2019), “Hateful speech bears negative repercussions and is particularly damaging in college communities.” After browsing different social media platforms and asking others about their experiences of hate speech, I found that Facebook accounts built on anonymous submissions had become hot spots for racist hate speech.

The two accounts in question are “USYD Rants 2.0” and “UNSW Rants Revived”. Although the latter ceased operations in 2020, the former was still active at the time of writing; its most recent post was published on March 30, 2024. It should be noted that neither account is officially certified by the universities. As the names suggest, the accounts operate as venting platforms: each provides a questionnaire link through which its audience, USYD and UNSW students, can submit rants and complaints. Submissions are published anonymously, meaning users browsing the accounts cannot know who sent each rant.

Through observation and searching, I identified three noteworthy posts that reflect severe hate speech against Chinese international students.

Figure 1. Post published on March 20, 2018 (since deleted). Source: https://m.163.com/dy/article/DDH4L33V0516I3CV.html

The post in Figure 1, published by UNSW Rants Revived, is the most overtly offensive of the three cases. The contributor declares a hatred of Chinese people in the first sentence, attacks Chinese international students in abusive terms, and then compares the group with Indian and Indonesian international students.

Figure 2. Post published on January 10, 2024

This post, published by USYD Rants 2.0, also shows hostility towards the Chinese international student community. The difference is that the contributor opens by identifying himself or herself as a Vietnamese student, directly setting the two groups against each other. The contributor also assumes that everyone reading the post hates Chinese international students just as much; in other words, the contributor treats this targeted hatred as a common, shared sentiment.

Figure 3. Post published on March 3, 2024

“Racist and similar arguments are often couched in ‘scientific’ language or presented as jokes or as ironic commentary” (Flew, 2021). The contributor in Figure 3 does not consider himself or herself a racist, yet writes about Chinese international students in a tone that merely sounds rational.

Analysis of the case study

Contributors who make these comments may see themselves as simply describing or complaining about a phenomenon when they post it on a social media platform. They claim to be exercising free speech and may not realise that their speech has turned into hate speech against a specific group. The problem is that the posts do not complain about a particular person; instead, a grievance that began with specific individuals is extended to the entire group of Chinese international students. By framing their complaints around nationality, the contributors turn what looks like simple free speech into racially discriminatory speech.

As non-native speakers living alone in a foreign country, Chinese students in Australia usually spend a great deal of time learning English to make sure they can study and get by abroad. Taking my own experience as an example: when I first started university, I often worried that native speakers would ignore me because I felt insecure about my speaking and listening skills.

When more sensitive Chinese international students read such racist hate speech, their fear of being discriminated against deepens, and they become even more afraid of speaking English and communicating with locals. Parekh (2012) states, “Because hate speech intimidates and displays contempt and ridicule for the target group, group members find it difficult not only to participate in the collective life but also to lead autonomous and fulfilling personal lives.” This makes the anti-Chinese hostility in Figure 1 all the more damaging: the hateful rant does nothing to encourage communication between Chinese international students and locals. On the contrary, Chinese students will prefer to communicate with people of the same nationality in their native language, as this may give them a greater sense of security.

It is also worth noting that some Chinese international students react with anger: they fight back against these racist remarks because they believe it is rude and unreasonable to be treated this way when they abide by the law. In striking back, their comments may generate new hatred against other groups. This echoes what Saha et al. (2019) found: “The exposure to hate leads to greater stress expression. However, everyone exposed is not equally affected; some show lower psychological endurance than others.”

Moderation

It is essential to draw a clear line between free speech and hate speech. This raises a question worth thinking about: how can the managers of social media platforms create rules that limit and regulate hate speech?

It is undeniable that Facebook has already made efforts to combat hate speech. Its Community Standards state, “We don’t allow hate speech on Facebook. It creates an environment of intimidation and exclusion, and in some cases may promote offline violence” (Meta, n.d.). Allan (2017) also pointed out, “Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week — that’s around 288,000 posts a month globally.” However, as Figure 1 shows, hate speech still circulates on Facebook.

It should be noted that personal accounts on Facebook cannot post anonymously; anonymous mode exists only for group posts. Facebook notes in its Help Centre: “If you post anonymously, your name and profile picture will still be visible to the group’s admins and moderators, as well as to Facebook” (Facebook, n.d.). That is, Facebook’s moderators can still review and remove hate speech posted anonymously in a group.

The design of an account like USYD Rants 2.0, however, lets contributors avoid platform review entirely: even the account operator does not know the contributor’s identity, and the post is anonymous to browsing users. This two-way anonymity sidesteps Facebook’s existing rules.

Figure 4. Screenshot of the USYD Rants 2.0 submission link

The terms and conditions provided by USYD Rants 2.0 state, “No discriminatory rants will be published”. This rule looks standard but does not hold in practice; the post in Figure 2 proves the point. No one knows how the operator decides whether a submission counts as hate speech, and because of the two-way anonymity, the Facebook platform cannot take action against the contributor’s account.

Therefore, Facebook should strengthen its review of content posted by anonymous-submission accounts, or formulate new rules for them. Such accounts and the content they publish usually require manual supervision by the platform’s moderators. According to Roberts (2019), “Content moderators serve an integral role in making decisions that affect the outcome of what content will be made available on a destination site.” Assigning dedicated moderators to accounts that specialise in publishing anonymous submissions is one feasible plan, as the sketch below suggests.
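To make this concrete, the short Python sketch below shows one way a pre-publication review step for an anonymous-submission account could work. It is a minimal illustration under stated assumptions: the Submission class, the placeholder pattern list, and the route() logic are hypothetical names invented for this example, not any real Facebook or platform API, and a real pipeline would pair trained human moderators with far more sophisticated classifiers than a keyword list.

# A minimal, hypothetical sketch of a pre-publication moderation queue for an
# anonymous-submission account. Names such as Submission, FLAG_PATTERNS, and
# route() are illustrative only, not any real platform API.
from dataclasses import dataclass, field
from typing import List

# Toy pattern list; a real system would use trained classifiers plus human review.
FLAG_PATTERNS = {"hate", "go back to"}

@dataclass
class Submission:
    text: str
    flagged: bool = False
    reasons: List[str] = field(default_factory=list)

def screen(sub: Submission) -> Submission:
    """Flag submissions that contain any deny-listed pattern."""
    lowered = sub.text.lower()
    for pattern in FLAG_PATTERNS:
        if pattern in lowered:
            sub.flagged = True
            sub.reasons.append(f"matched pattern: {pattern!r}")
    return sub

def route(sub: Submission, review_queue: list, publish_queue: list) -> None:
    """Hold flagged items for human moderators; queue the rest for publication."""
    (review_queue if screen(sub).flagged else publish_queue).append(sub)

if __name__ == "__main__":
    review, publish = [], []
    for text in ["Exams in week 13 are brutal",
                 "I hate [group] students, they should go back to ..."]:
        route(Submission(text), review, publish)
    print(f"{len(review)} held for human review, {len(publish)} queued to publish")

Even in this toy form, the point is structural: flagged submissions are held for a human decision before publication rather than being removed after the fact, which matches Roberts’s (2019) description of moderators deciding what content “will be made available” in the first place.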

In addition, given that the contributors in this case study are usually university students, the university itself, as the host community, should also be involved in reviewing hate speech. According to Saha et al. (2019), “The efforts to regulate hateful speech on college campuses pose vexing socio-political problems.” As mentioned above, the University of Sydney welcomes diverse groups, and a social media account that hosts hateful speech runs against its values. Moreover, although the account is unofficial, its use of the university’s name and badge in its profile can easily mislead others and damage the university’s reputation.

Conclusion

Social media platforms still have a long way to go in moderating hate speech; maintaining a healthy space for public discussion is everyone’s responsibility. Platform regulators bear a particularly large share of it: reducing the amount of hate speech helps protect more groups from physical and psychological harm. Although artificial intelligence has become increasingly capable, platforms still need to keep improving both their policies and their human moderation efforts to identify and limit the spread of hate speech.

Whether the community in question is a university or the world at large, all groups deserve understanding and respect; only then can the diversity of human communities be sustained in the long run.

Reference list

Allan, R. (2017, June 27). Hard Questions: Who Should Decide What Is Hate Speech in an Online Global Community? Meta. https://about.fb.com/news/2017/06/hard-questions-hate-speech/

Barlow, J. P. (1996, February 8). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. https://www.eff.org/cyberspace-independence

eSafety Commissioner. (2019). Online hate speech. https://www.esafety.gov.au/research/online-hate-speech

Facebook. (n.d.). Facebook Help Centre. https://www.facebook.com/help/530628541788770

Flew, T. (2021). Hate Speech and Online Abuse. In Regulating Platforms (pp. 91–96). Polity.

Meta. (n.d.). Hate Speech. Meta Transparency Center. https://transparency.fb.com/en-gb/policies/community-standards/hate-speech/

Natarajan, A., Moslimani, M., & Lopez, M. H. (2022, December 16). Key facts about recent trends in global migration. Pew Research Center. https://www.pewresearch.org/short-reads/2022/12/16/key-facts-about-recent-trends-in-global-migration/

Parekh, B. (2012). Is There a Case for Banning Hate Speech? In The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37–56). Cambridge University Press.

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33–72). Yale University Press.

Saha, K., Chandrasekharan, E., & De Choudhury, M. (2019). Prevalence and Psychological Effects of Hateful Speech in Online College Communities. Proceedings of the 10th ACM Conference on Web Science, 255–264. https://doi.org/10.1145/3292522.3326032


