
To ensure the efficacy and fairness of AI technologies, ethical concerns must be the starting point for navigating the intersection of AI, hate speech, and digital rights.

Introduction
Online hate speech is a growing problem: it harms individuals and has lasting effects on many different groups. In response, artificial intelligence (AI) is increasingly being used to combat it. Ethics plays a central role in navigating the intersection of AI, hate speech, and digital rights, and ethical principles must be considered when building AI tools to fight hate speech. This blog post begins with the problems of hate speech and AI technology, then examines ethical principles for using AI to combat hate speech while protecting digital rights.
First, what is “hate speech”?
“We are creating a world where anyone, anywhere, may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” This line comes from John Perry Barlow’s “A Declaration of the Independence of Cyberspace”, published by the Electronic Frontier Foundation (EFF), and it is a powerful vision of how free speech should work online. DataReportal’s Global Digital Insights (2023) reports that 64.4% of the world’s population is online, which makes the Internet a community where people speak different languages, hold different opinions, and share their thoughts. As the Internet develops, however, its dangers change too.

Source from: Behance
Hate speech is speech that demeans or stigmatises people on the basis of characteristics that mark them out as different, such as race, ethnicity, gender, religion, nationality, or sexual orientation (Parekh, 2012). Tirrell (2017) goes further, calling hate speech “toxic speech” to emphasise that its effects accumulate gradually, like a slow-acting poison. Hate speech, in other words, is a serious problem that can harm individuals and groups over the long term. Online hate speech is becoming more common around the world, and it often travels alongside disinformation and extremist political content (Singer and Brooking, 2018).
How does hate speech strike like poison?

Source from: BBC News
In March 2021, 19-year-old Bristol resident Phoebe Jameson told Newsbeat about the online abuse she had received (Baggs, 2021). She said she was bullied every day in 2020, at times receiving around 100 death threats online in a single day. The abuse began after she shared a body-positive photo for International Women’s Day in March 2020; the picture attracted a flood of comments about her appearance. By December, the constant negativity had so badly damaged her physical and mental health that she attempted suicide. Worse still, when she reported the death threats to local police, they simply told her to stop using social media and said they would only intervene in the most extreme cases of online abuse. Phoebe felt that whether to leave social media should be her choice, and she did not want to be pushed out. Her experience shows how dangerous online hate speech can be and how important it is to give people who are targeted better support. Hate speech can severely damage a person’s mental health, which is why the problem needs to be tackled effectively.
The intersection of AI and hate speech
According to Sinpeng et al. (2021), how to address online hate speech effectively has been at the centre of public concern over digital content regulation. Controlling what is posted online and ensuring that harmful content is removed requires a comprehensive regulatory framework. Content moderation has therefore become a major concern: social media platforms must balance freedom of expression against stopping the spread of harmful content.
Moderating content on social media platforms is a complex process, and moderators face real psychological strain: they must review large volumes of posts, videos, and images that are frightening, violent, or offensive (Suzor, 2019). The process is largely manual and error-prone, and users frequently complain that the rules are not applied consistently. So what can we do to ensure that content moderation is carried out lawfully and fairly?
As Kate Crawford argues in Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, systems that perpetuate oppression can be refused; as conditions on Earth change, calls for data protection, labour rights, climate justice, and racial equity should be heard together, and these interconnected movements for justice should inform how we understand artificial intelligence (Crawford, 2021).
Artificial intelligence (AI) is advancing rapidly and is proving useful in many fields, and considerable effort has gone into AI models that detect hate speech on online platforms. Building such a system typically involves several stages (Goldberger, 2022):
- Data must be collected from a variety of sources in order to train an AI model.
- Once collected, the data must be labelled as to whether or not it contains hate speech.
- The labelled data is then used to train an AI model with machine learning algorithms that learn to identify and detect hate speech.
- The trained model is then validated and tested before it is deployed (a minimal code sketch of these stages appears below).

Process of AI combating hate speech | Source from: WORLD ECONOMIC FORUM
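To make these stages more concrete, here is a minimal sketch in Python. It is not any platform’s actual moderation system: the file name labelled_posts.csv, its columns, and the simple TF-IDF plus logistic-regression model are illustrative assumptions, and real systems are far more sophisticated and multilingual.

```python
# A minimal sketch of the four stages above, assuming a hypothetical labelled
# CSV ("text", "is_hate") rather than any platform's real moderation data.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# 1. Collect: load posts that have already been gathered from various sources.
data = pd.read_csv("labelled_posts.csv")  # hypothetical file

# 2. Label: each row carries a human-assigned label (1 = hate speech, 0 = not).
X, y = data["text"], data["is_hate"]

# 3. Train: learn to identify hate speech from the labelled examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
model.fit(X_train, y_train)

# 4. Validate: check performance on held-out data before any deployment.
print(classification_report(y_test, model.predict(X_test)))
```

Even in a toy setup like this, the validation step matters: performance should be checked for each language and community the system will cover before any automated removal is allowed.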
Facebook has been widely criticised for the ongoing spread of toxic speech and online harassment on its platform (Sinpeng et al., 2021). In response, it has taken steps to address the problem, including improving its machine-learning detection filters, expanding human review, and engaging with stakeholders (Facebook, 2020).
Yet even with these serious efforts, much work remains. A civil society report found that Facebook failed to address organised hate against minorities based on race, religion, or gender, largely because detecting and responding to hate speech remains difficult across different social, cultural, and multilingual contexts (Avaaz, 2019).
As we have seen in the past, AI is not perfect and can reproduce prejudice and discrimination. In one instance, Google’s image recognition software mistakenly labelled two Black people as “gorillas” (Zhang, 2015), prompting significant public outcry at the time. Although that particular failure has since been addressed, it still forces us to ask what factors must be considered if AI is to be used to identify and remove hate speech.
Case Study of the “Ban the Scan” Global Campaign
Consider one such instance. In January 2021, Amnesty International launched “Ban the Scan”, a global campaign to ban the use of facial recognition technology, warning that law enforcement could weaponise these systems against marginalised groups around the world.

Black Lives Matter protesters in New York, August 2020 | Source from: Ban dangerous facial recognition technology that amplifies racist policing
Law enforcement’s use of facial recognition technology has been contentious for several years. People of colour in New York City, for example, are at risk of wrongful arrest and police abuse because the NYPD has tracked thousands of New Yorkers using facial recognition. Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project at the Urban Justice Center, argues that this technology is not only flawed and biased but is also built by illegally scraping millions of images from social media profiles and driver’s licence records (Ban facial recognition technology, 2021).
The controversial use of facial recognition technology by law enforcement therefore raises serious ethical and human rights concerns. Its use needs to be monitored, and ethical principles should be in place to govern it.
Using this campaign as a case study, let’s examine the ethical issues that were overlooked when the technology was deployed. In digital media, concerns about the potential loss of personal privacy are constantly raised. According to Flew (2021), privacy is not an absolute right; what it means and how it is protected depend on particular social and legal circumstances.
A key ethical consideration mentioned here is privacy.
When law enforcement uses facial recognition for surveillance, it can covertly track people’s behaviour and movements, leading to privacy violations and potential infringements of civil liberties. Before “Ban the Scan” launched, there were reported instances of law enforcement using facial recognition to forcibly enter a citizen’s home without a warrant, further deepening privacy concerns (Ban facial recognition technology, 2021). At the time, there were no established rules or laws governing police use of facial recognition.
Another essential consideration is fairness and non-discrimination.
One of the campaign’s central concerns is the racial prejudice this technology produces. Amnesty International notes that facial recognition undermines the rights of minorities and people with darker skin, who are more likely to be misidentified and detained without cause. This stems both from algorithmic bias in facial recognition systems and from a lack of adequate regulation.
Ethical Considerations for Developing and Implementing AI Technologies for Hate Speech
Ethics can be thought of as the set of moral principles that guide how AI technology is built and used. As Pasanen (2022) notes, “AI requires moral guidelines as much as human decision-making because it does things that would normally require human intelligence.” Such principles help ensure that AI technology treats people and groups equitably. As the case study above shows, deploying AI technology without adequate ethical safeguards creates serious risks.

Source from: DOD AI Ethical Principles Offer Strength, Opportunity
There are a number of ethical issues to consider when using AI to combat hate speech:
- Data and privacy governance: To detect hate speech, AI systems may collect and analyse personal data. It is crucial that this information is gathered and handled in a way that respects people’s privacy and complies with the relevant data protection laws.
- Bias, fairness, and transparency: AI systems can be trained on biased data, which leads to biased decision-making. It is therefore crucial for businesses and platforms to ensure that the data used to train AI models is diverse, free of prejudice, and representative of all groups. Users should also be told clearly and concisely when AI technologies are being used; this transparency is essential for building trust (a minimal sketch of a per-group fairness check follows this list).
- Accountability: Organisations and individuals who build and deploy AI systems should be held responsible for the outcomes of their work. This includes ensuring that AI systems are reliable, secure, and do not harm individuals or groups.
- Human oversight: We should not rely solely on AI systems to counteract hate speech. Human reviewers need to oversee AI decisions so that they remain fair and so that the systems keep improving over time in response to user feedback.
- Free speech: While it is crucial to stop hate speech, the right to free expression must also be preserved, so a balance has to be struck between removing hateful content and protecting legitimate speech.
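To illustrate the bias and fairness point above, here is a minimal sketch of a per-group fairness check, assuming you already have a model’s predictions and a (hypothetical) group attribute for each evaluated post. It compares false-positive rates, i.e. how often non-hateful posts are wrongly flagged, across groups; large gaps would suggest the system silences some groups more than others.

```python
# A minimal sketch of a per-group fairness check on hypothetical evaluation data.
import pandas as pd

# Hypothetical evaluation set: true label, model prediction, and a group
# attribute (e.g. dialect or demographic) attached to each example.
eval_df = pd.DataFrame({
    "label":      [0, 0, 1, 0, 1, 0, 0, 1],
    "prediction": [1, 0, 1, 0, 1, 1, 0, 0],
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
})

def false_positive_rate(df: pd.DataFrame) -> float:
    """Share of genuinely non-hateful posts that the model flagged as hateful."""
    negatives = df[df["label"] == 0]
    if negatives.empty:
        return float("nan")
    return (negatives["prediction"] == 1).mean()

# Report the rate per group; a large gap between groups signals possible bias.
for group, rows in eval_df.groupby("group"):
    print(f"group {group}: false positive rate = {false_positive_rate(rows):.2f}")
```

A check like this does not prove a system is fair, but routinely publishing such per-group figures is one practical way to make the transparency and accountability principles above concrete.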
Conclusion
This blog has covered the definition of hate speech and the challenges of applying artificial intelligence to it. To protect digital rights and advance fairness, transparency, and accountability, it is paramount that ethical considerations shape how AI tools are created and applied. The use of AI to identify and remove hate speech should be governed by an ethical framework that addresses concerns such as bias, privacy, and freedom of expression.
References
Barlow, J. P. (2016, January 20). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. https://www.eff.org/cyberspace-independence
Avaaz. (2019, October). Megaphone for Hate: Disinformation and Hate Speech on Facebook During Assam’s Citizenship Count.
Ban facial recognition technology. (2021, January 26). Amnesty International. https://www.amnesty.org/en/latest/press-release/2021/01/ban-dangerous-facial-recognition-technology-that-amplifies-racist-policing/
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
DataReportal. (2023, January 26). Digital 2023: Global Overview Report. https://datareportal.com/reports/digital-2023-global-overview-report
Flew, T. (2021). Regulating Platforms. Polity.
Goldberger. (2022, August). The solution to online abuse? AI needs human intelligence. World Economic Forum. https://www.weforum.org/agenda/2022/08/online-abuse-artificial-intelligence-human-input/
Baggs, M. (2021). Online hate speech rose 20% during pandemic: “We’ve normalised it.” BBC News. https://www.bbc.com/news/newsbeat-59292509
Parekh, B. (2012). Is There a Case for Banning Hate Speech? In M. Herz & P. Molnar (Eds.), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37-56). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139042871.006
Pasanen, J. (2022, April 28). AI Ethics Are a Concern. Learn How You Can Stay Ethical. AI Ethics Are a Concern. Learn How You Can Stay Ethical. https://learn.g2.com/ai-ethics
Singer, P. W., & Brooking, E. T. (2018). LikeWar: The Weaponization of Social Media. Houghton Mifflin Harcourt.
Sinpeng, A., Martin, F., Gelber, K., & Shields, K. (2021, July 5). Facebook: Regulating hate speech in the Asia Pacific. Final report to Facebook under the auspices of its Content Policy Research on Social Media Platforms Award. Department of Media and Communications, University of Sydney, and School of Political Science and International Studies, University of Queensland.
Suzor, N. P. (2019). Lawless: the secret rules that govern our digital lives. Cambridge: Cambridge University Press. https://doi.org/10.31235/osf.io/ack26
Tirrell, L. (2017). Toxic Speech: Toward an Epidemiology of Discursive Harm. Philosophical Topics, 45(2), 139–162. https://www.jstor.org/stable/26529441
Zhang, M. (2015, July 1). Google Photos Tags Two African-Americans As Gorillas Through Facial Recognition Software. Forbes. https://www.forbes.com/sites/mzhang/2015/07/01/google-photos-tags-two-african-americans-as-gorillas-through-facial-recognition-software/