Online Hate Speech Doesn’t Just Stay Online

Advances in online information technology have quietly woven new media into people’s daily lives, making it possible for everyone to communicate freely on online platforms. It also means that harmful speech, such as hate speech, can be published without restriction and spread exponentially. As a result, hate speech has become a significant obstacle to the regulation and governance of the Internet today.

What is hate speech?

Hate speech is speech intended to encourage, provoke, or incite discrimination and hatred against a group of people based on identity characteristics such as race, ethnicity, gender, religion, or disability, using hurtful expressions such as insults or defamation (Parekh, 2012). Terrorist and violent speech on social networking sites can cause immediate or long-term damage to the targeted group and can also undermine social security. However, what counts as hate speech varies with social context, historical circumstance, and cultural norms. This lack of clarity around the definition and regulation of hate speech is one of the most pressing issues of the present.

Figure 1.1: Hate speech by Andreas Töpfer

Many mass incidents in recent years have been linked to festering hate speech, especially in the new media era, where digital platforms and social media have further amplified its dangers (Flew, 2021). Even though many social media platforms (e.g., Facebook and Twitter) have attempted to improve their content review policies, they are still unable to control the widespread dissemination of hate speech. This blog argues that new media have made it easier for people to express themselves but have also exacerbated inequality: under the cover of free speech, online hate speech deepens the harm done to targeted groups and further marginalises them.

Case study – Facebook promoted hate speech to dehumanise the Rohingya

According to the United Nations, the Myanmar military’s violence against the Rohingya minority in 2016 and 2017 may constitute the gravest violations of international law. The social media platform Facebook was among those responsible for inciting atrocities against the Rohingya (Venier, 2019). For the overwhelming majority of Rohingya, Facebook is the only accessible Internet platform through which they can receive and share news. While it gives the Rohingya an opportunity to contend for a voice, hate speech creates a harmful social environment for members of vulnerable groups, making it difficult for them to attain an equal voice and status, or to counter the damaging effects of hate speech through new media.

Members of the Rohingya community sued Facebook and forcefully denounced its algorithmic system for accelerating the spread of hurtful, demeaning remarks about the Rohingya, such as calling them ‘lowly humans’ and ‘thorns that need to be removed.’ Instead of removing the violent content or the accounts behind it, the platform allowed the dehumanising remarks to keep spreading. With its ‘global’ corporate culture, Facebook is overly dependent on opaque third-party outsourcing companies and its trusted partner organisations to censor hate speech (Sinpeng et al., 2021). When hate speech is not constrained by platform regulation and the technical systems that enforce it, it can reach a broader audience, and audiences exposed to hateful messages may spread them further, causing ongoing damage to vulnerable groups. The harm and aggression caused by online animosity, in other words, will not diminish over time and may even resurface when related events occur.

Figure 1.2: Facebook accused of promoting hate speech against Rohingya via YouTube

Governance dilemma

  • The definition and regulation of hate speech remain vague: Because restricting hate speech sits in tension with the doctrine of free speech, the state’s right to restrict it, and the extent to which such restriction is reasonable, have been the subject of long and ongoing debate. The threshold at which a state defines hate speech is taken to mark the boundaries of free speech, yet hate speech is tolerated differently across countries, cultures, and social contexts, and there is as yet no uniform definition. Facebook, for example, hosts many linguistic and cultural groups, and the platform acknowledges that it still faces technical difficulties and significant cost pressures in detecting and responding to hate speech in a dynamic, multilingual, and culturally diverse speech environment (Sinpeng et al., 2021). At the same time, many social media companies, Facebook among them, have their roots in the United States and are shaped by American values of free speech, so whether online speech should be regulated at all remains controversial: companies face criticism and accusations of wielding excessive power and unlawfully censoring citizens’ speech.

“The language and context dependent nature of hate speech is not effectively captured by Facebook’s classifiers … It requires local knowledge to identify, and consultation with target groups to understand the degree of harm they experience.”

Sinpeng et al., 2021
  • The diversity, anonymity and immediacy of hate speech on social media platforms: Today’s platforms give people more diverse ways to express themselves, letting everyone use images, audio, and video alongside text. Visual content is more attractive than text alone and significantly increases user engagement, which also means hate speech in these forms spreads more rapidly. The anonymity of social media reduces the constraints on individuals, and some exploit it to join group pile-ons and imitate others in attacking disadvantaged groups, spreading harmful speech and making governance more difficult. And in the digital age, a single press of the send button makes a statement public and widely distributed, so regulation struggles to keep pace with the speed at which online hate speech spreads.
  • Conflict between platform interests and the public interest: From the perspective of its business interests, a platform’s ultimate goal is to maximise profit. In a 60 Minutes interview in the United States, for example, former employee Frances Haugen alleged that Facebook used algorithms that amplify hate speech in order to generate traffic, having found that stoking fear and hatred in users is the most effective way to increase engagement (Pelley, 2021). This demonstrates how online platforms can pursue financial gain through technological superiority while ignoring the ongoing damage done to the public online.

Controversy and Bias in Artificial Intelligence Algorithms

As the amount of data on the Internet grows daily, social media platforms must rely on AI recognition algorithms to assist with content review. Recognition algorithms based on artificial intelligence let technology companies save money, avoid risk, and present an appearance of technological neutrality. Nonetheless, this raises several questions:

Can intelligent technology overcome the problem of recognising the meaning of human language? Are humans more accurate?

Many critics argue that artificial intelligence is ineffective at identifying hate speech because such systems lack the affective and evaluative abilities of humans. Their weakness is that they can only deal with the patterns of hate speech they have already seen, and may fail on new patterns that emerge later. Expression online is no longer limited to text: it combines emojis, images, audio, and video, uploaded, downloaded, and distributed in multiple languages and forms. Facebook launched the Hateful Memes Challenge in 2020 to encourage the development of models that can detect the intent to harm others. Yet even the most effective systems struggled to match human inspection on Facebook’s labelled dataset of 10,000 memes, because most machine learning systems cannot comprehend the text in an image, or the true meaning behind the image’s composition, as effectively as humans can (Wiggers, 2020).
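To make the text-only limitation concrete, here is a minimal sketch of the kind of baseline text classifier that moderation pipelines often start from. Everything in it, including the training examples, labels, and test caption, is invented for illustration; it is not Facebook’s classifier, only a demonstration of why a model that sees nothing but a meme’s caption cannot judge the meme.

```python
# A minimal sketch of a text-only hate speech classifier (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real system would learn from millions of labelled posts.
texts = [
    "they are lowly humans",           # hateful (echoes the Rohingya case)
    "thorns that need to be removed",  # hateful
    "what a lovely sunny day",         # benign
    "look at this cute puppy",         # benign
]
labels = [1, 1, 0, 0]  # 1 = hate speech, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# A hateful meme's caption can be innocuous on its own. A classifier that
# sees only the text has no access to the image that gives it its meaning.
print(model.predict(["love the way you smell today"]))  # likely [0]: looks benign
```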

At the same time, AI recognition has not kept pace with how quickly the grammar and structure of online hate speech change. Facebook’s failure to promptly remove hate speech against the Rohingya in Myanmar suggests, for example, that the company lacked a detection model for Burmese, and many smaller languages likewise remain beyond AI detection. Moreover, even as AI learns to comprehend language more deeply, purveyors of hate speech can evade censorship with subtext that appears harmless on the surface but carries a hateful meaning.
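A toy example illustrates how easily such evasion defeats the simplest approach, an exact-match blocklist of sensitive phrases. The blocked phrase and test strings below are invented for illustration:

```python
# Why simple keyword filters fail: trivial obfuscation and coded subtext
# slip past an exact-match blocklist (all phrases here are illustrative).
BLOCKLIST = {"lowly humans"}

def keyword_filter(text: str) -> bool:
    """Return True if the text contains a blocked phrase verbatim."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

print(keyword_filter("they are lowly humans"))  # True  -> caught
print(keyword_filter("they are l0wly hum4ns"))  # False -> spelling tricks evade it
print(keyword_filter("thorns to be pruned"))    # False -> coded subtext evades it
```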

Figure 1.3: Facebook’s hateful memes sample

What will it take to meet the challenge of governance issues?

In response to governments’ calls for action, Internet platforms such as Facebook, YouTube, and Twitter have started to combat online hate speech. However, because perceptions of online hate speech vary from country to country and cannot be captured by a single national standard, reviewing and managing hate speech content across countries is often challenging. Governments and civil society organisations therefore need to be fully aware of local interpretations of hate speech and work with online technology companies to harmonise its definition and account for its harmful effects, so that regulation does not overreach into people’s rights to privacy and freedom of expression. Governance rests on platform self-regulation, backed by government intervention. In addition, good governance is commonly held to depend on the openness of the process by which online technology companies censor hate speech (Braithwaite & Drahos, 2000). Users whose content triggers moderation are often never told why it was flagged; greater transparency fosters greater trust in the platform, since users can post in accordance with clear, published content standards, and it enables them to take part in content review themselves.

Given the current limitations of AI recognition, much hateful content can still propagate rapidly through subtext and other disguised forms, so blocking posts with lists of sensitive words alone is impractical. A more workable approach is escalation: if a large number of distinct users report a post within a few days, it is routed to human review to determine whether it constitutes hate speech. This is more accurate than relying solely on AI while keeping the human workload manageable.
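As a rough sketch of that escalation rule, the snippet below queues a post for human review once enough distinct users have reported it within a time window. The threshold, window, and all names are illustrative assumptions, not any platform’s actual policy:

```python
# A sketch of report-threshold escalation to human review (illustrative only).
from collections import defaultdict
from datetime import datetime, timedelta

REPORT_THRESHOLD = 50       # distinct reporters required (assumed value)
WINDOW = timedelta(days=3)  # "within a few days"

# post_id -> {reporter_id: time of that user's most recent report}
reports: dict[str, dict[str, datetime]] = defaultdict(dict)
human_review_queue: list[str] = []

def report_post(post_id: str, reporter_id: str, now: datetime) -> None:
    """Record a report; escalate the post once enough distinct users report it."""
    reports[post_id][reporter_id] = now
    # Count only distinct reporters whose reports fall inside the window.
    recent = sum(1 for t in reports[post_id].values() if now - t <= WINDOW)
    if recent >= REPORT_THRESHOLD and post_id not in human_review_queue:
        human_review_queue.append(post_id)  # a human, not the model, decides
```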

Overall, while new media have brought convenience to people’s lives, digital technology has exponentially amplified the destructive power of hate speech. The harm done on social networking platforms is not confined to the Internet: it can have lasting effects on the lives of marginalised and vulnerable people. Governments, civil society organisations, and online technology corporations must therefore improve the technical means of censoring hate speech and develop more diverse methods of regulating it.


References

Braithwaite, J., & Drahos, P. (2000). Global business regulation. Cambridge, UK: Cambridge University Press.

Flew, T. (2021). Regulating platforms. Cambridge: Polity, pp. 91–96.

Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz and P. Molnar (eds), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37–56). Cambridge: Cambridge University Press.

Pelley, S. (2021). Whistleblower: Facebook is misleading the public on progress against hate speech, violence, misinformation. CBS News. Retrieved April 16, 2023, from https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03/

Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Final Report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Dept of Media and Communication, University of Sydney and School of Political Science and International Studies, University of Queensland.

Venier, S. (2019). The Role of Facebook in the Persecution of the Rohingya Minority in Myanmar: Issues of Accountability Under International Law. The Italian Yearbook of International Law, 2019(1), 231–248. https://doi.org/10.1163/22116133_02801014

Wiggers, K. (2020). AI still struggles to recognize hateful memes, but it’s slowly improving. VentureBeat. Retrieved April 16, 2023, from https://venturebeat.com/ai/ai-still-struggles-to-recognize-hateful-memes-but-its-slowly-improving/
