Yuqing Lei / ylei6510
With the acceleration of globalisation and the rapid development of the Internet, the era of social media has arrived. Because online expression is so free, the definition of hate speech remains vague and its regulation challenging, and online hate speech has become one of the most prominent issues erupting worldwide. Hate groups have become more prevalent in recent years and their numbers are steadily growing. Hate speech is defined as the incitement or expression of hatred towards a group of people on the basis of characteristics such as “race, ethnicity, gender, religion, nationality and sexual orientation” (Flew, 2021, p. 115). Hate speech is widely considered to inflict harm in a way comparable to more obvious physical harm. Lawrence, for example, argues that what he calls ‘offensive racist speech’ is ‘like a slap in the face’ and is experienced as a ‘blow’ which, once delivered, reduces the likelihood of dialogue and therefore of participation in free expression (Lawrence, 1993, p. 68). Brison, on the other hand, argues that ‘verbal attacks’ can cause sustained, long-term ‘traumatic’ damage (Brison, 1998, pp. 42–44).
We use these academic terms to understand hate speech: it is not merely an offensive remark about someone, or something that hurts their feelings, but speech that can cause harm, whether immediately or over time.
Hate speech, then, can be defined as expression that incites hatred against individuals or particular groups on the basis of race, religion, sexual orientation and similar characteristics, for example through insult and defamation or by creating a hostile environment.
Hate speech has expanded into a global problem
With the development of the Internet, artificial intelligence and big data, people are constantly immersed in flows of information, and every kind of information and speech is growing exponentially. Hate speech on social media platforms has gradually attracted the attention of the legal profession, but this research has mostly been confined to debating the boundaries of freedom of expression and has not yet met the need to regulate hate speech, which is an unavoidable and important issue. The Internet is now full of hate speech of all kinds, and people with ulterior motives exploit the sheer number of Internet users, many of them young, to spread extreme ethnic, racial, political and personal hatred online.
Firstly, hate speech on the Internet is wrapped in the anonymity of the network, quietly crossing national borders and language barriers and spreading globally in obscure forms such as pictures, symbols, audio and video. Secondly, the globalisation of the Internet has made it easier for hate organisations to evade local legal regulation, which has led to the emergence of a large number of hate websites within just a few years. According to the Simon Wiesenthal Center, 20 years ago there were more than 2,300 hate websites around the world, a third of them extremist websites founded by Europeans but hosted on servers in the United States to evade European anti-hate laws. In the decade that followed, the websites and social media channels promoting violence and hate speech grew to number in the tens of thousands. Finally, owing to global political instability, frequent social movements and the spread of COVID-19 in recent years, global hate speech has reached a new peak. According to Facebook’s statistics, anti-Asian speech, hate speech and racial discrimination related to COVID-19 have intensified; Facebook’s fourth-quarter figures for 2020 put the frequency of hate speech on its platform at 0.8%.
It can be argued that hate speech provokes prejudice and discrimination, drives a wedge between social groups and is an attack on inclusion and diversity. The aggregating nature of new media undoubtedly exacerbates discrimination, undermines the social inclusion of some groups and infringes on the public’s right to equality. When social media propagates prejudice, discrimination easily becomes invisible and legal regulation becomes more complex, which gives reason to fear that hate speech in the new media environment can trigger greater inequality, harm specific groups and even provoke real violence.
Hate speech leads to vicious incidents
In the wake of the Covid-19 pandemic, the United States became more divided, racist and xenophobic, and anti-Asian hate crimes increased dramatically: six Asian women were killed in a vicious shooting in Atlanta on March 16, 2021. In May 2021, US President Joe Biden signed the COVID-19 Hate Crimes Act, which aims to combat the high incidence of hate crimes against Asians in the US since the pandemic began. Statistics show that since the outbreak there has been a dramatic rise in vicious speech on the Internet and a significant increase in violent crime and hate crime. The dangers of racism, which spreads like a ‘virus’, are deep-rooted.
(Figure1: People rally to protest against discriminatory acts and hate crimes against Asians in New York, USA, on 21 March 2021.)
Practical strategies of major social media platforms in the face of the prevalence of hate speech
The hate speech policies of social platforms such as YouTube and Reddit are closely tied to their public image. Even though platforms like Facebook and Twitter have invested substantial resources and budgets in keeping speech healthy, they still face public criticism and are under great pressure.
YouTube has drawn many critics for its inaction on speech regulation. It is often seen as having changed from a video-sharing, news and entertainment site into a platform where people spread violent material, extreme views and misinformation. Amid this storm of public opinion, YouTube introduced more than 30 new rules in 2018 to set things right. Within a short period, however, the volume of hate speech it identified rose 46-fold compared with the past, forcing the platform to shut down many channels trading in hate speech. Since then, despite the gradual improvement of YouTube’s policies on hate speech, the platform has still been unable to filter the huge volume of hate speech in time. Public research reports argue that YouTube in many ways encourages the spread of violent or hateful content, and that the platform’s own technology is not sufficient to identify dangerous or extremist content promptly, making its moderation inefficient and drawing further public pressure.
(Figures 2/3: YouTube’s regulatory policy against the phenomenon of hate speech.)
The social news community Reddit adopted a new content policy in 2015 and banned several overtly racist boards. But Reddit has long been criticised for not doing enough to get rid of racist content: it has been called the home of the “most violent racist” content on the Internet and has shown “how scary the darkest corners of the web can be”. Even Reddit’s former CEO publicly condemned the platform for “nurturing and monetising white supremacy and hate all day long”. In 2020, hundreds of Reddit moderators sent an open letter to Reddit’s board and CEO demanding changes to the platform’s hate speech policy. This came as the ‘Black Lives Matter’ protests were intensifying, and Reddit co-founder Alexis Ohanian announced his departure in response to the protests, along with his wish that his position go to a Black candidate. The CEO also admitted that the rules had long been deliberately left implicit, which had “caused all sorts of confusion and problems”. The subsequent policy change represents a major shift in Reddit’s approach to speech on the platform.
(Figure 4: A user’s satirical poster about the Reddit platform’s ineffective removal of racist content.)
At this point, we can summarise the realities technology companies face. First, because no single national definition of online hate speech applies everywhere, companies must define it themselves in technically actionable, enforceable terms: a ‘precise definition’ that improves algorithmic accuracy, delimits the scope of identification and reduces dispute and controversy. Second, a company’s ability to respond to online hate speech is now directly linked to its image and to political and commercial risk; failure or inaction in regulating speech can cause a company’s image to collapse, yet each company’s own cost-benefit calculations and the strength of its algorithm development shape its strategy. Third, technology companies face external pressure from states and political organisations, which makes constant friction between the state, the public and technology companies likely. Fourth, the major Internet companies are rooted in the US and shaped by American free-speech values; whether they should regulate online speech at all is still debated, and companies face accusations of excessive power and of unlawfully censoring citizens’ free speech.
In short, as some media reports have put it, technology companies are at a ‘crossroads’ and in a ‘prisoner’s dilemma’. Faced with cultural and legal differences in how hate speech is defined from one country to another, they struggle to present themselves as politically neutral technology service providers in order to cope with enormous political risk and pressure from advertisers.
The media are parties to market competition, and profit-seeking is in their nature. New media network operators control the infrastructure and technology of communication, pushing information through precisely targeted algorithms to expand their social influence and widen their control over discourse. The platforms seek greater economic returns and further growth through their all-round technological advantages. The interplay between state power, the private power of the platform companies and the power of the media gives us reason to worry that the right to equality cannot withstand the combined weight of these multiple powers.
New media are quietly reshaping our lives as the world advances, changing how we live and think, while the harm caused by hate speech has been multiplied by the technological progress of digital platforms. Across a range of Internet platforms, hate speech now reaches a wider audience than ever before, eroding the public’s right to equality and the physical and mental health of specific groups. The public should therefore stay alert to the negative consequences of new media and reflect deeply on the violence, harm and social crisis that hate speech causes. This also warns Internet technology companies and new media enterprises to attend to values of equality and the health of the social environment while pursuing commercial interests, which in turn requires cooperation across disciplines so that the humanities and the natural sciences jointly supervise and govern.
Flew, T. (2021). Hate speech and online abuse. In Regulating platforms (p. 115). Polity. https://bookshelf.vitalsource.com/books/9781509537099
Lawrence, C. R., III. (1993). If he hollers let him go: Regulating racist speech on campus. In M. J. Matsuda, C. R. Lawrence III, R. Delgado, & K. W. Crenshaw (Eds.), Words that wound: Critical race theory, assaultive speech, and the First Amendment (pp. 53–88). Westview Press.
Brison, S. (1998). Speech, harm, and the mind-body problem in First Amendment jurisprudence. Legal Theory, 4(1), 39–61.
Simon Wiesenthal Center. (2014, May 1). District Attorney Vance and Rabbi Abraham Cooper announce the Simon Wiesenthal Center’s report on digital terrorism and hate. https://www.wiesenthal.com/about/news/district-attorney-vance-and.html
Facebook. (2021, February 11). Community Standards Enforcement Report.
Facebook. (2021, March 17). Community Standards: Hate speech. https://www.facebook.com/communitystandards/recentupdates/hate_speech/
The YouTube Team. (2019, June 5). Our ongoing work to tackle hate. YouTube Official Blog. https://blog.youtube/news-and-events/our-ongoing-work-to-tackle-hate/
Weight, J. (2020, December 3). Updates on our efforts to make YouTube a more inclusive platform. YouTube Official Blog. https://blog.youtube/news-and-events/make-youtube-more-inclusive-platform/
Sherwood, J. (2020, June 29). Black Lives Matter at Charles Sturt University. Charles Sturt University. https://news.csu.edu.au/opinion/black-lives-matter-at-charles-sturt-university
Brook, J. (2012, August 1). Multiple and severe hate speech on YouTube. Online Hate Prevention Institute. https://ohpi.org.au/multiple-and-severe-hate-speech-on-youtube/