Hate Speech and Online Harm: A Growing Threat in the Digital Age

As an internet user, have you ever seen, experienced, or been affected by hate speech? As the internet and social media continue to expand their influence, the darker aspects of online communication are coming to the surface. Hate speech can spill over from public online spaces into users' private lives, and those targeted may suffer serious consequences, from deteriorating mental health to, in the worst cases, suicide. These harmful consequences are one form of what is known as online harm. This blog examines the growing threat of hate speech and online harm in the digital age, discusses the current state of affairs and some recent developments, and proposes some solutions to the challenges that have arisen.

What is hate speech and online harm? What harm do they do?

Hate speech refers to forms of communication, whether verbal, textual, or visual, that facilitate or incite hatred, discrimination, or even violence towards a particular group or individual based on protected characteristics such as race, religion, gender, or nationality (United Nations, n.d.).

Online harms cover a broader range of negative consequences resulting from online interactions, and they can manifest in various ways. Common forms include:

1). Cyberbullying, the use of digital communication tools to harass and/or threaten individuals;

2). Trolling, where individuals intentionally disrupt online conversations by posting offensive or irrelevant comments;

3). The spread of disinformation or misinformation, which can lead to the polarization of public opinion and the undermining of trust in institutions.

Another form of online harm is the non-consensual sharing of explicit content, which can cause significant emotional distress and damage the targeted individual's reputation. Likewise, invasion of privacy, where someone's personal information is published without their consent, is a form of online harm that can lead to real-world danger. Online harassment can also target specific groups, such as racial, religious, or LGBTQ+ communities, exacerbating existing social inequalities and perpetuating discrimination.

Hate speech is one source of online harm, and both have the potential to cause significant psychological distress, social exclusion, and even physical harm to those targeted, making them pressing concerns for individuals, communities, institutions, and governments.

The current state of hate speech and online harms.

A recent report by Sinpeng et al. (2021) showed that hate speech on Facebook targeting ethnic and religious minorities is a significant problem in the Asia Pacific region. At the same time, harms such as cyberbullying and harassment are increasingly common, particularly for marginalized groups, as Carlson and Frazer's (2018) study of Indigenous Australians' experiences of online harassment shows.

Although digital platforms have tried to address these problems, their current rules and enforcement practices do not adequately protect users from hate speech and online harms (Flew, 2021). Many platforms rely on AI and algorithmic systems to screen content and flag harmful material, but these systems are imperfect and can remove too much legitimate content or miss genuinely harmful posts (Roberts, 2019). In addition, the opacity of algorithmic decisions makes it hard for users to understand how content is moderated, which can erode trust in platform rules (Crawford, 2021).
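To make the over- and under-removal problem concrete, here is a minimal sketch of threshold-based automated moderation. It assumes a hypothetical classifier that assigns each post a toxicity score; the thresholds, names, and routing logic are purely illustrative and not any platform's actual system.

```python
# Minimal sketch of threshold-based automated moderation (illustrative only).
# The classifier, its scores, and the threshold values are hypothetical; real
# platform systems are far more complex and combine models with human review.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    toxicity_score: float  # assumed output of some trained classifier, 0.0-1.0


def moderate(post: Post, remove_threshold: float = 0.9, review_threshold: float = 0.6) -> str:
    """Route a post based on its toxicity score."""
    if post.toxicity_score >= remove_threshold:
        return "remove"        # automated removal, risk of false positives
    if post.toxicity_score >= review_threshold:
        return "human_review"  # ambiguous cases escalated to moderators
    return "keep"              # left up, risk of false negatives


if __name__ == "__main__":
    for p in [Post("1", "example A", 0.95), Post("2", "example B", 0.7), Post("3", "example C", 0.1)]:
        print(p.post_id, moderate(p))
```

The point of the sketch is the trade-off itself: raising the removal threshold reduces wrongful takedowns but lets more harmful content through, and lowering it does the opposite.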

Governments are also stepping in to make digital platforms handle hate speech and online harms better, for example through the Australian Online Safety Act 2021 and the UK Online Safety Bill. However, these laws have faced criticism of their own, with concerns about over-regulation, restrictions on free speech, and whether platforms can realistically comply with the new rules (Milmo, 2023).

In short, current platform rules do not reliably keep users safe. The difficulties that users, platforms, and governments face in tackling these issues point to the need for new approaches to make the digital world a safer and more welcoming place for everyone.

Existing Cases

The rise of social media allows language to circulate more quickly, which lets hate speech spread online rapidly and anonymously. Major platforms such as Facebook, Twitter, and YouTube are often quick to respond: they have developed internal policies to regulate hate speech and have signed a code of conduct with the European Commission (European Commission, n.d.). However, such decisions are made by companies rather than states, and they are reactive, addressing hate speech only after it has caused harm. Because different groups define hate speech differently, platforms have adopted a variety of approaches to regulating it.

The image below illustrates the relationship between the sender, the posted content, and the recipient in Facebook's regulatory process.

Facebook's current hate speech regulation procedure (Ullmann & Tomalin, 2019)

When users report a post, the flagged content is sent for human review and evaluation against Facebook's own definition of hate speech. New edge cases constantly arise during review, so the definition is repeatedly revised; moreover, this approach still relies on manual, human-centred inspection, which does not adequately protect targeted groups from harassment (Ullmann & Tomalin, 2019).
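The workflow Ullmann and Tomalin describe can be pictured, very roughly, as a report queue feeding human reviewers. The sketch below is a deliberately simplified assumption: the function names, the queue, and especially the keyword "definition" of hate speech are hypothetical stand-ins, not Facebook's actual process.

```python
# A minimal sketch of the report-and-review workflow described above:
# recipients flag posts, flagged posts join a queue, and human reviewers check
# them against the platform's current definition of hate speech. Every name
# and data structure here is hypothetical and greatly simplified.

from collections import deque

# Stand-in for the platform's working definition of hate speech; in reality
# this is a policy document applied by trained reviewers, not a keyword list.
HATE_SPEECH_TERMS = {"slur_a", "slur_b"}

report_queue = deque()   # reports waiting for human review
decision_log = []        # decisions recorded by reviewers


def report_post(post_id, text, reporter):
    """A recipient flags a post; it joins the human review queue."""
    report_queue.append({"post_id": post_id, "text": text, "reporter": reporter})


def human_review():
    """A reviewer works through the queue and records one decision per report."""
    while report_queue:
        report = report_queue.popleft()
        violates = any(term in report["text"].lower() for term in HATE_SPEECH_TERMS)
        decision_log.append({"post_id": report["post_id"],
                             "action": "remove" if violates else "keep"})


if __name__ == "__main__":
    report_post("p1", "contains slur_a", reporter="user42")
    report_post("p2", "harmless text", reporter="user7")
    human_review()
    print(decision_log)
```

Even in this toy form, the bottleneck is visible: every decision depends on humans working through the queue with a definition that keeps changing.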

Twitter has sought to build on this approach: it announced a ban on "dehumanising speech", which it defines as follows:

Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of their human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to a tool for some other purpose (mechanistic).

(Gadde & Harvey, 2018)

Examples of dehumanization (Gadde & Harvey, 2018)

Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.

(Gadde & Harvey, 2018).

A new feature has also been added to Twitter that filters sensitive content and sends warnings to the intended recipients. Users can also mark their tweets as possibly containing sensitive content.

(From Twitter)
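A rough sketch of how such a warning layer might behave is given below. The field names and rendering logic are hypothetical; the sketch only illustrates the idea of an interstitial placed between sensitive content and its recipient.

```python
# Minimal sketch of a sensitive-content warning like the Twitter feature
# described above: content marked as sensitive (by its author or by the
# platform) is hidden behind a warning until the recipient chooses to view it.
# Names and fields are hypothetical.

from dataclasses import dataclass


@dataclass
class Tweet:
    author: str
    text: str
    marked_sensitive: bool = False  # set by the author or by platform systems


def render_for_recipient(tweet: Tweet, recipient_opted_in: bool) -> str:
    """Show the tweet directly, or an interstitial warning, depending on settings."""
    if tweet.marked_sensitive and not recipient_opted_in:
        return "[Warning: this tweet may contain sensitive content. Tap to view.]"
    return tweet.text


if __name__ == "__main__":
    t = Tweet(author="@someone", text="example text", marked_sensitive=True)
    print(render_for_recipient(t, recipient_opted_in=False))
    print(render_for_recipient(t, recipient_opted_in=True))
```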

How to change? Education

Education and awareness of these concepts are crucial to help individuals use digital tools safely and responsibly. Programs, awareness campaigns, and educational resources should teach users how to identify and report harmful content, as well as how to navigate the digital environment safely and responsibly (Goggin et al., 2017). For instance, the e-Safety Commissioner in Australia offers various resources and educational materials to raise awareness about online safety and digital citizenship (e-Safety Commissioner). By promoting digital literacy and providing resources to identify and report harmful content, society can empower users to take control of their online experiences and contribute to a safer online environment.

Education can also encourage users to report harmful content and take an active part in creating a safer online environment. Encouraging positive online behaviors and building shared community norms can help members of online communities collectively mitigate the negative effects of hate speech and online harm (Carlson & Frazer, 2018).

Promoting respectful discussion, highlighting positive content, and providing tools for constructive conversation can make the internet a more inclusive and welcoming space for everyone. For example, Facebook's Community Standards aim to create a safe environment that promotes positive interactions among users (Facebook). By fostering a sense of shared responsibility, users can work together to reduce the prevalence of hate speech and other damaging content online.

Enhancing user control and customization

Platforms should provide users with more control over the content they see and interact with, allowing them to tailor their online experiences to their preferences and needs (Nissenbaum, 2018). This could include providing users with options to filter or block certain types of content, as well as giving them the ability to customize their feed based on their interests and values.
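As a concrete illustration of this kind of user control, the sketch below filters a feed against per-user preferences (blocked topics and muted accounts). The structure, labels, and function names are assumptions made for illustration, not any platform's real API.

```python
# Minimal sketch of user-controlled content filtering: each user keeps their
# own blocked topics and muted accounts, and the feed is filtered against
# those preferences before display. Topic labels and fields are hypothetical.

from dataclasses import dataclass, field


@dataclass
class UserPreferences:
    blocked_topics: set = field(default_factory=set)   # topics the user never wants to see
    muted_accounts: set = field(default_factory=set)   # accounts the user has muted


@dataclass
class FeedItem:
    author: str
    topic: str
    text: str


def personalized_feed(items, prefs):
    """Drop items the user has chosen not to see."""
    return [item for item in items
            if item.topic not in prefs.blocked_topics
            and item.author not in prefs.muted_accounts]


if __name__ == "__main__":
    prefs = UserPreferences(blocked_topics={"graphic_violence"}, muted_accounts={"@troll"})
    feed = [FeedItem("@friend", "sport", "match tonight"),
            FeedItem("@troll", "sport", "abusive reply"),
            FeedItem("@news", "graphic_violence", "disturbing footage")]
    print([item.text for item in personalized_feed(feed, prefs)])
```

Note that user-level filtering of this kind would sit on top of, not replace, the platform's own moderation rules.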

User control and customization are vital in addressing hate speech and online harm, but striking a balance between providing such options and implementing policies to protect users is equally important. In this context, the role of collaboration between various stakeholders becomes crucial.

To bridge the gap between user control and policy implementation, online platforms should adopt a proactive approach to ensure that user customization options do not inadvertently contribute to the proliferation of hate speech and online harm. This involves continuously monitoring and refining customization features to prevent malicious actors from exploiting them for harmful purposes (Nissenbaum, 2018; Flew, 2021). By fostering a collaborative environment among users, experts, marginalized communities, and regulatory bodies, online platforms can better understand and address the needs of their diverse user base while simultaneously enforcing policies that curb online harm (Sinpeng et al., 2021).

Clear guidelines and policies

Flew (2021) argues that platforms should regularly review and update their policies to better address these issues. The process should involve input from diverse stakeholders, including users, experts, and marginalized communities, to ensure that the policies reflect a wide range of perspectives and address the unique challenges faced by different groups (Sinpeng et al., 2021).

For example, platforms could learn from the Australian Online Safety Act 2021 and the UK Online Safety Bill in addressing harmful content and balancing freedom of speech (e-Safety Commissioner). The concept of a statutory duty of care, as discussed by Woods and Perrin (2021), could provide a framework for platforms to accept responsibility for the content they host and their users’ safety. Such a collaborative approach allows stakeholders to be involved in the decision-making process, which will lead to a more effective and sustainable Internet.

The concept of a statutory duty of care can provide a robust foundation for platforms to manage their content and user safety, but it is also essential to acknowledge the role of technology in this process.

AI and machine learning algorithms are increasingly used in content moderation as tools to identify and remove harmful content. However, it is essential to ensure that these technologies do not reinforce existing biases or suppress legitimate speech, which highlights the need for ongoing evaluation and refinement of such systems.

AI

Increased transparency around AI and algorithmic systems can help users navigate their online experiences more effectively, but it is only one part of addressing hate speech and online harm. Others caution that platforms should be careful when applying such technology to value-laden decisions. The next essential step in creating a safe and inclusive internet is understanding the cultural and regional differences that shape the unique challenges faced by different communities.

Platforms should work on enhancing the transparency of their algorithms and AI systems (Crawford, 2021). This could involve disclosing how their algorithms work, allowing users to adjust their content preferences, and providing clearer explanations about content moderation decisions (Roberts, 2019). Increased transparency can help users understand how the platform’s systems work and make more informed choices about their online experiences.
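For example, a more transparent moderation decision could carry a record of which policy was applied together with a human-readable explanation for the affected user. The sketch below is hypothetical in its field names and wording; it only illustrates the kind of explanation users currently rarely receive.

```python
# Minimal sketch of attaching an explanation to a moderation decision, in the
# spirit of the transparency measures discussed above. Policy names, fields,
# and wording are hypothetical.

from dataclasses import dataclass


@dataclass
class ModerationDecision:
    post_id: str
    action: str        # e.g. "remove", "warn", "keep"
    policy: str        # which rule was applied
    reason: str        # human-readable explanation shown to the user
    appealable: bool   # whether the user can request another review


def explain(decision: ModerationDecision) -> str:
    """Build the notice a user would see instead of an unexplained removal."""
    notice = (f"Your post {decision.post_id} was actioned ({decision.action}) "
              f"under the '{decision.policy}' policy: {decision.reason}")
    if decision.appealable:
        notice += " You can appeal this decision."
    return notice


if __name__ == "__main__":
    d = ModerationDecision(post_id="p1", action="remove",
                           policy="hateful conduct",
                           reason="the post targets a protected group.",
                           appealable=True)
    print(explain(d))
```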

By acknowledging that one-size-fits-all solutions may not be suitable for addressing diverse issues, it becomes evident that regulations and platform policies should consider cultural sensitivity and inclusivity. This can be achieved by involving a wide range of perspectives and voices in the decision-making process, which will ultimately lead to more robust and effective strategies to combat hate speech and online harm, tailored to the specific needs and challenges of each community.

Countering hate speech and online harm requires efforts from all parties.

In conclusion, platforms need clearer rules and stronger enforcement to ensure that online spaces are safe and welcoming for everyone. This means being proactive about identifying and removing harmful content while balancing user protection with freedom of speech.

Increased AI and algorithmic transparency fosters trust between users and platforms. By making their content moderation and algorithmic decision-making processes clearer, platforms can empower users to make informed choices about their online experiences. Digital literacy and awareness programs can also help people navigate the complexities of the online world, and online community building and positive engagement can make the internet a more welcoming and supportive place.

Furthermore, collaborative regulation and governance efforts involving governments, platforms, users, and marginalized communities can lead to more effective and inclusive policies. Individuals will also be able to tailor their online experiences to their tastes, making them feel like they have control over the content they see and how they interact with it.

When trying to regulate online spaces, governments, platforms, users, and people from marginalized groups should all be involved to make sure that policies and solutions work for everyone.

Reference List

Carlson, B., & Frazer, R. (2018). Social media mob: Being indigenous online. Macquarie University.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

European Commission. (n.d.). Countering illegal hate speech online #noplace4hate. Retrieved April 10, 2023, from https://ec.europa.eu/newsroom/just/items/54300

Flew, T. (2021). Regulating platforms. John Wiley & Sons.

Gadde, V., & Harvey, D. (2018). Creating new policies together. Twitter. Retrieved April 11, 2023, from https://blog.twitter.com/official/en_us/topics/company/2018/Creating-new-policies-together.html

Goggin, G., Vromen, A., Weatherall, K. G., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Digital rights in Australia. http://hdl.handle.net/2123/17587

Karppinen, K. (2017). Human rights and the digital. The Routledge Companion to Media and Human Rights. https://doi.org/10.4324/9781315619835-9

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. https://doi.org/10.1177/1461444815608807

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. https://doi.org/10.1080/1369118X.2017.1293130

Milmo, D. (2023, January). TechScape: Finally, the UK’s online safety bill gets its day in parliament – here’s what you need to know. The Guardian. Retrieved from: https://www.theguardian.com/technology/2023/jan/17/online-safety-bill-meta-pinterest-snap-molly-russell

Nissenbaum, H. (2018). Respecting context to protect privacy: Why meaning matters. Science and Engineering Ethics, 24(3), 831-852. https://doi.org/10.1007/s11948-017-9971-y

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award. https://r2pasiapacific.org/files/7099/2021_Facebook_hate_speech_Asia_report.pdf

Suzor, N. P. (2019). Lawless: The secret rules that govern our lives. Cambridge University Press.

Ullmann, S., & Tomalin, M. (2019). Quarantining online hate speech: Technical and ethical perspectives. Ethics and Information Technology, 22(1), 69–80. https://doi.org/10.1007/s10676-019-09516-z

United Nations. (n.d.). What is hate speech? United Nations. Retrieved April 10, 2023, from https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech?gclid=EAIaIQobChMInYGB_OWc_gIVOdxMAh1jXAm_EAAYASAAEgJ4hvD_BwE
