(Student ID: 530349651, Name: Yue Qi)
Introduction
Do you know what hate speech is? A broad definition appears in the policy recommendations published in 2016 by the European Commission Against Racism and Intolerance (ECRI), which covers its most distinctive forms: hate speech involves the promotion, advocacy or incitement of the denigration, hatred or vilification of an individual or a group, as well as any intimidation, stereotyping, insult, negative stigmatization or threat directed at such an individual or group, and any justification of these forms of expression, on a non-exhaustive list of grounds including race, sexual orientation, age, religion or belief, language, sex, gender identity, colour, national or ethnic origin, disability and descent (ECRI, 2016, p. 16).
Online hate speech can cause great harm. According to Waldron (2012), it produces both indirect and direct damage. Indirect social damage raises moral and legal questions about whether the spread of hate speech should be tolerated in society. Direct damage takes the form of negative social and psychological repercussions for individuals and groups. For instance, exposure to online hate speech may have long-term effects that reinforce discrimination against vulnerable groups, and victims may develop prolonged defensive and hyper-vigilant behaviours that put them at risk (Leets, 2002).
According to Sinpeng et al. (2021), hate speech is harmful in both a causal and a constitutive way. Causal harms result directly from the expression of hate speech: it may incite individuals to commit specific acts of discrimination against members of the target group or, at the extreme, specific acts of violence against them. Constitutive harms come from the spoken words themselves, meaning the statement is harmful in and of itself. Examples include insulting and persecuting members of the target group, categorizing them as inferior, subjecting them to subordination, and justifying prejudice against them.
Who is responsible for hate speech?
Unfortunately, hate speech is nothing new in society, and social media and other online communication platforms have begun to play a bigger role in hate crimes. Müller and Schwarz (2021) found that social media can reinforce anti-refugee sentiment, pushing some potential perpetrators to cross moral boundaries and commit acts of violence. Another study suggested that offline crimes against minority groups in the UK have increased alongside the growth of online hate speech (Cardiff University, 2019). That is to say, the fundamental algorithms used by leading social media platforms have the potential to turn potential perpetrators into actual perpetrators of criminal activity and expressions of hatred, both online and offline.
In 2018, a contributor to the creation of YouTube’s recommendation algorithm revealed that the algorithm often recommends videos with extremist political views to users who watch more conventional news sources (Tufekci, 2018). This suggests that social media algorithms have become more than a catalyst for online and offline hate speech and crime: they are a tool for radicalization.
Based on these findings, platforms should be held responsible for hate speech. But platforms alone are not enough to regulate it; other institutions should share responsibility, namely the public institutions that develop domestic public diplomacy policy measures. The two should make collaborative efforts to reduce online hate speech (Doncel et al., 2023). This blog analyzes this argument critically in light of Facebook’s practice of regulating hate speech in the Asia-Pacific region.
What can social media companies do to fight hate speech?
According to Doncel et al. (2023), tolerance is needed in the management of hate speech, as deleting posts that contain it can be counter-productive and escalate conflict. Studies have suggested that the way to combat prejudiced and harmful viewpoints is more speech, not censorship (Cohen-Almagor, 2016). Because words matter so much, culture wars and ignorance have a significant impact on the problem of hate speech and can be counted among its root causes. Many efforts have been made by states and supra-state actors to eliminate this root cause. Education is vital for people at an early age, but for the adult public it is more useful to develop alternative narrative strategies to counter hate speech. Dialogue, the mobilization of civil society, and concrete action around hate speech can all be used to address it (Gagliardone et al., 2015).
Research has identified three tools as effective in combating hate speech (Doncel et al., 2023). The first is transparency combined with platform-level moderation of hate speech content, both of which fall under corporate social responsibility (CSR). Transparency means disclosing policies and practices for dealing with hate speech, including content removal and user suspension. Meaningful transparency should include a communication process explained to the many stakeholders involved and a deeper understanding of the complexities of content moderation. The second tool is the regulation of hate speech, which can be a sensitive issue because of potential conflicts with free speech. The third tool is promoting critical citizenship, which must address the demands of an increasingly diverse global context in which different genders, ethnic groups, sexual orientations and cultures coexist. This tool falls under the purview of public diplomacy and is aimed at adult audiences (Doncel et al., 2023).
Facebook’s regulation of hate speech in the Asia-Pacific region illustrates the use of these three tools. According to Sinpeng et al. (2021), Facebook’s internal process for reviewing content involves three sections of the organisation: Public and Content Policy, Global Operations, and Engineering and Product (see Figure 1). These sections collaborate with each other, and the review process relies on various resources to detect hate speech: platform users who voluntarily report or “flag” content they believe violates the rules; outsourced moderation services provided by external providers, including third-party companies; and cooperation with trusted civil society partners who report content violations and offer policy staff information and advice about trends in violating content.
Figure 1. Facebook content regulation ecosystem. Source: Sinpeng et al., 2021, p.18.
While Facebook’s organisational structure, technologies, strategies and policies are predominantly geared towards a global audience, the platform’s approach to identifying and moderating hate speech, as well as its education and advocacy activities, is becoming more informed by local contexts. Experts in the Asia-Pacific region, market specialists, outsourced content reviewers and trusted partner organisations act as information transmitters in that procedure, relaying local information and conditions to Facebook. Facebook thus exhibits a “glocalization” corporate culture that pursues global goals, policies and objectives but is constantly shaped and adjusted by its investments in specific countries and regions and its effects on those societies (Sinpeng et al., 2021).
Use public diplomacy to fight hate speech
“Public diplomacy” refers to the tools used by countries, coalitions of states, and some substate and non-state actors to understand cultures, attitudes and behaviour; to build and manage relationships; and to influence beliefs and mobilize actions in order to advance values and interests (Gregory, 2011). Hate speech is essentially an attack on an individual or a group based on who they are and the groups they belong to. This is the complete opposite of what public diplomacy stands for, which is why public diplomacy can be an effective tool against hate speech.
What happens to hate speech in a society without public diplomacy? What would happen if the institutions that let us know the literature, traditions, art and thought of the world beyond ourselves no longer existed? According to Doncel et al. (2023), public diplomacy helps the young generation better understand other countries’ cultures, lifestyles, ways of thinking and political movements, as well as their own country’s and other countries’ positions on key issues. Without public diplomacy agencies, then, hate speech would be amplified and cause even worse effects. In addition, the “civic commitment” that public diplomacy could translate into counter-narratives against hate speech would be nearly impossible to fulfil. According to Doncel et al. (2023), countering hate speech and getting citizens to adopt more reality-based narratives requires: (a) drawing on history to explain contemporary issues and extract lessons, however clichéd, realistic or grossly unrealistic that history may be; (b) grounding causal analysis in ideas, values and moral principles so as to avoid objectifying insults; and (c) favouring symbolic and theatrical solutions, since sustainable agreements or alliances are not desirable.
According to Sinpeng et al. (2021), governing hate speech in the Asia-Pacific region should consider not only the different constitutions of each country but also local cultural, religious and historical traditions and ethnic differences, and should use flexible laws that reflect this diversity. Platforms should also connect with local civil society and minority groups to monitor hate speech and compile lists of speech that can, on occasion, lead to violence, incitement to discrimination, and hostility. These measures reflect several of the recommendations above.
Conclusion
Combining these studies (Doncel et al., 2023) with Facebook’s efforts to regulate hate speech in the Asia-Pacific region (Sinpeng et al., 2021), here are some things to consider when fighting online hate speech:
First, a fundamental condition for any strategy to combat online hate speech is cooperation between public institutions and the companies themselves. The closer the collaboration between the two, the more effective initiatives against hate speech will be.
Second, it is important to incorporate hate speech moderation into the CSR strategies of social media companies themselves. Doing so frames companies’ actions against hate speech as a concern for the proper use of social media by society and the public. It also makes companies aware of the dangers of hate speech and encourages them to moderate content on their own initiative, without legal or criminal measures forcing them to do so.
In addition, it is important for the state to participate in drafting and implementing guidelines on hate speech. Through these guidelines, social media companies can standardize content moderation for hate speech. But national legislation should not be unilateral: to establish specific, clear and effective parameters, countries should work with social media companies so that a platform’s measures against hate speech do not interfere with the user experience.
The contextual dependence of hate speech makes local knowledge necessary to fully understand it: local definitions of hate speech, the legislation that captures it, the risk that government legislation oversteps free speech, and ways to deepen cooperation and partnership with local communities. To capture genuine hate speech, platforms should keep refining their definitions of it without compromising users’ freedom of speech.
A regional hate-speech monitoring program run jointly by governments, civil society organisations and platforms, along the lines of the one designed by the European Commission, would be a huge boost to the local management of hate speech. It would help reach an agreed definition of hate speech and its harms, improve remedies and reporting, and curb the spread of hate speech in the region.
References
Cardiff University (2019), Increase in Online Hate Speech Leads to More Crimes against Minorities. Phys.org, available at: https://phys.org/news/2019-10-online-speech-crimes-minorities.html (accessed 15 March 2022).
Cohen-Almagor, R. (2016), Facebook and Holocaust Denial. Justice, Vol. 57, pp. 10-16.
Doncel-Martín, I., Catalan-Matamoros, D., & Elías, C. (2023). Corporate social responsibility and public diplomacy as formulas to reduce hate speech on social media in the fake news era. Corporate Communications, 28(2), 340–352. https://doi.org/10.1108/CCIJ-04-2022-0040
European Commission against Racism and Intolerance (ECRI). (2016), ECRI General Policy Recommendation No.15 on Combating Hate Speech. Strasbourg: Council of Europe. Retrieved from: www.coe.int/t/dghl/monitoring/ecri/activities/GPR/EN/Recommendation_N15/REC-15-2016-015-ENG.pdf.
Gagliardone, I., Gal, D., Thiago, A. and Gabriela, M. (2015), Countering Online Hate Speech. Programme in Comparative Media Law and Policy, University of Oxford, Oxford.
Gregory, B. (2011). American Public Diplomacy: Enduring Characteristics, Elusive Transformation. The Hague Journal of Diplomacy, 2011(3-4), 351–372. https://doi.org/10.1163/187119111X583941
Leets, L. (2002). Experiencing hate speech: Perceptions and responses to anti-Semitism and antigay speech. Journal of Social Issues, 58(2), 341–361. DOI: 10.1111/1540-4560.00264.
Müller, K., & Schwarz, C. (2021). Fanning the Flames of Hate: Social Media and Hate Crime. Journal of the European Economic Association, 19(4), 2131–2167. https://doi.org/10.1093/jeea/jvaa045
Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating Hate Speech in the Asia Pacific. Department of Media and Communications, The University of Sydney.
Tufekci, Z. (2018). YouTube, the Great Radicalizer. New York Times (Online).
Waldron, J. (2012). The Harm in Hate Speech. Cambridge, MA: Harvard University Press.