As the Internet develops at an ever-increasing pace, traditional media is gradually being replaced. The growing number of Internet users has forced traditional media to change, and many outlets have launched electronic versions. Yet even as traditional media evolves, its market share continues to be eroded by Internet media, especially large platform companies such as Facebook (Meta), Twitter and Google (Flew, 2021). These platforms have a user base that includes almost everyone who uses the Internet, and they give people opportunities to search for information and build social networks, but the hate speech and online harms that circulate on them also demand our attention. According to the eSafety Commissioner (n.d.), 14% of Australian adults experienced online hate speech in the 12 months from August 2019, and since this figure counts only reported victims, the true number may be greater. The Internet is mobile and immediate, so its user base is effectively unlimited: anyone with a smart device can publish anonymously, without using a real name. Because users are, to a certain extent, not held responsible for their own statements, some continue to post hate speech and inflict online harms. Hate speech and online harms have become a means of soft harm, with some users venting their emotions and others inciting the rest of the user base to engage in aggressive behavior.
(Image from: ABC Religion & Ethics, 2020)
How do hate speech and online harms form and evolve?
Hate speech has always been a problem, both historically and socially, and with the growth of the Internet hate groups have moved from offline to online, where it is convenient and safe for them to operate. It is easy for them to create accounts and make public statements, and radical hate speech can even bring people with similar views together to organize inflammatory events. Freedom of speech is promoted in Australia, and everyone is allowed to express their views, but some people use the guise of free speech to spread hate. In 2008, the fourth in a series of seminars marking the 60th anniversary of the Universal Declaration of Human Rights (UDHR) discussed how to distinguish free speech from hate speech. The distinction is difficult to draw because people do have the right to express differing opinions on the Internet, and those opinions cannot be defined as hate speech unless they directly cause personal harm or social impact (Australian Human Rights Commission, 2008). In the online environment, hate speech covers an increasingly wide range of topics. The “whining” and insulting language directed at outsiders, refugees and the poor, and the discrimination against other racial, gender and geographic groups, may not seem to produce immediate harm, but it makes extreme hate crimes more insidious and harder to regulate. Online harms span race, gender, children, extremist groups and more, among which online child exploitation and abuse and extremist agitation are of particular concern (Australian Government, n.d.). For example, extremist and terrorist groups use the Internet to spread harmful information, raise funds and deepen personal radicalization. They gain attention by live-streaming bloody and violent images such as killings, and some viewers go on to join them, which is a very dangerous thing.
How do hate speech and online harms affect us?
Hate speech and online harms damage the self-esteem and dignity of the attacked user, leaving the victim feeling helpless and desperate. This has a great impact on victims’ physical and mental health, and some even develop serious psychological problems as a result. If online harms get out of control, they pose a threat to social stability and security: rumors that spread unchecked can trigger social panic and anxiety, and even seriously damage the social order.
The Internet is a place where anyone can say anything and post images, but unfortunately women are often more vulnerable to online attacks such as verbal violence, sexual harassment, stalking and rumors. The Victims’ Commissioner (2022) received 534 responses from women about online harms such as online abuse and sexual harassment; of these women, 60% were abused online and 40% offline. The majority of the responses concerned intimate-image abuse and online harassment, with men misusing intimate photos without women’s permission, leading to accusations against and victimization of women, and in some cases up to two years of online stalking (Victims’ Commissioner, 2022). This phenomenon is connected to the culture of the society, in which there persists a stereotype that men are more powerful than women. People seem slightly more tolerant of what men say online, and slightly more demanding of what women post. Women are therefore more likely to be insulted and attacked when they publish their opinions or images on the Internet, and this restriction and confinement of women increases the likelihood that they will be victims of online violence. D’Souza et al. (2018), writing in the UNSW Law Journal, point out that Australian law does not protect women from gendered hate speech attacks: many victims find that the speech they were subjected to is hardly ever defined as hate speech in Australian law, which more often remains neutral. Given the preference for free speech and the absence of a prohibition on gendered hate speech, it is difficult for women victims to obtain legal assistance. Defamation and gendered hate speech are difficult to define globally, not just in Australia, because everyone has the right to freedom of expression and there is no uniform definition of defamation and gendered hate speech in every region.
Sociocultural factors, together with differing regional definitions of hate speech, make women more likely to be victims of online harms.
Race has been a prominent topic of discussion in recent years across various platforms, with Matamoros-Fernández and Farkas (2021) reporting that users discuss the topic on social media platforms and that it is part of the politics of the platforms themselves. Race has always been a sensitive but popular topic on large platforms such as Google, Facebook (Meta), Twitter and Weibo. Most users discuss it neutrally and objectively, but some get personal and make racist hate speech. These users may be racist in real life and use the Internet as a virtual platform to vent their emotions and stir up racial confrontation. For example, when COVID-19 first broke out, a wave of racist comments about the “China virus” appeared on the Internet and the topic topped the trending charts. The outbreak of the pandemic led to more serious discriminatory comments about Asians online, with many users expressing unabashed hatred of Asians (Human Rights Watch, 2020). Yet pandemics occur naturally and belong to no single country or region; some Internet users seized on the fact that China was the first country to detect COVID-19 to label China the virus’s country of origin. This is a highly personal form of racial hatred, which is not uncommon on the Internet, and some users also make extreme white-supremacist statements.
Big platforms are gradually becoming political platforms: national leaders speak out through them and use their user traffic for political activities. The platforms cooperate with these influential figures, and the regulation of their speech seems less strict. Consider the events of January 6, 2021, when a large number of Trump supporters, claiming the U.S. presidential election had been stolen, broke into the Capitol; the biggest instigator of this violence was then-President Donald Trump. Facebook (Meta) allowed his incitement: it did not ban Trump’s account even after internally detecting that a large number of users were already discussing the topic, and it did not remove or clean up the series of topics generated by his incitement even as protesters gathered in front of the Capitol in Washington, D.C. (Timberg, Dwoskin & Albergotti, 2021). In the end, the platform suspended Trump’s account for only two years, and it was unblocked this year. The big platforms seem lax in regulating the speech of politicians, even though Facebook (Meta) has a strict set of standards for ordinary users’ posts: users are not allowed to post images that make other users uncomfortable, such as bloody pictures of insects, and are not allowed to post discriminatory comments, which result in account closure if identified by the system (Meta, n.d.). Some of Trump’s posts were inflammatory in nature and indirectly violated these posting rules, but Facebook (Meta) executives allowed them by default (Timberg, Dwoskin & Albergotti, 2021).
- The United States
The U.S. First Amendment does not prevent private actors such as social media platforms from imposing their own restrictions on speech. Under Section 230 of the Communications Decency Act of 1996 (CDA), social media platforms are not considered publishers of content posted on their websites and are therefore further protected from private lawsuits over that content (Research Outreach, 2022).
- The United Kingdom
The Crime and Disorder Act, the Public Order Act, the Malicious Communications Act 1988 and the Communications Act 2003 prohibit derogatory statements based on race, national origin, religion and sexual orientation (Research Outreach, 2022).
The U.S. and the U.K. have different standards for regulating and defining hate speech, and because each country places different controls on freedom of speech, identifying hate speech across borders becomes all the more difficult.
Generation Z is gradually becoming cord-cutters, or choosing to be cord-nevers, and the popularity of the Internet has made more and more people rely on the web for information. Many large platforms have improved their regulatory systems in recent years, but some small businesses, in order not to lose their few remaining users to the large platforms, allow potentially hateful or harmful text and images to remain. Beyond small businesses’ desire to survive, some advertisers also place ads that may trigger hate speech or online harms in order to attract traffic. Hate speech and online harms may seem far away, but we have all been struck by these invisible cyber bullets. Have you ever received a message from a stranger saying, “Can you come sleep at my place?” Sometimes we simply don’t care, and perhaps we have even done the same thing unintentionally. As the quality of Internet users improves, more and more users will judge the authenticity of the information they receive and be more moderate and neutral in their comments. At the same time, online platforms have a responsibility to devise new means of dealing with the constantly emerging problems of online violence.
ABC Religion & Ethics. (2020, September 6). Dehumanising Muslims. ABC Religion & Ethics. Retrieved from: https://www.abc.net.au/religion/the-online-dehumanisation-of-muslims/12614148
Australian Government. (n.d.). Online harms & safety. Online Harms & Safety | Australia’s International Cyber and Critical Tech Engagement. Retrieved from: https://www.internationalcybertech.gov.au/our-work/security/online-harms-safety
Australian Human Rights Commission. (2008, November 18). Freedom of speech and race hate speech in Australia. Australian Human Rights Commission. Retrieved from: https://humanrights.gov.au/our-work/freedom-speech-and-race-hate-speech-australia
D’Souza, T., Griffin, L., Shackleton, N., & Walt, D. (2018). Harming women with words: The failure of Australian law to prohibit gendered hate speech. UNSW Law Journal. Retrieved from: https://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2018/09/DSouza-et-al.pdf
eSafety Commissioner. (n.d.). Online hate speech. eSafety Commissioner. Retrieved from: https://www.esafety.gov.au/research/online-hate-speech
Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 91–96.
Human Rights Watch. (2020, May 12). Covid-19 fueling anti-Asian racism and xenophobia worldwide. Human Rights Watch. Retrieved from: https://www.hrw.org/news/2020/05/12/covid-19-fueling-anti-asian-racism-and-xenophobia-worldwide
Matamoros-Fernández, A., & Farkas, J. (2021). Racism, Hate Speech, and Social Media: A Systematic Review and Critique. SAGE. Retrieved from: https://journals.sagepub.com/doi/pdf/10.1177/1527476420982230
Meta. (n.d.). Hate speech. Transparency Center. Retrieved from: https://transparency.fb.com/en-gb/policies/community-standards/hate-speech/
Research Outreach. (2022, November 22). Hate speech regulation on social media: An intractable contemporary challenge. Research Outreach. Retrieved from: https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/
Timberg, C., Dwoskin, E., & Albergotti, R. (2021, October 29). Inside Facebook, Jan. 6 violence fueled anger, regret over missed warning signs. The Washington Post. Retrieved from: https://www.washingtonpost.com/technology/2021/10/22/jan-6-capitol-riot-facebook/
Victims’ Commissioner. (2022, June 1). Impact of online abuse and harassment revealed in new research from the Victims’ Commissioner. Victims’ Commissioner. Retrieved from: https://victimscommissioner.org.uk/news/impact-of-online-abuse-and-harassment-revealed-in-new-research-from-the-victims-commissioner/