The Ethical Implications of AI Chatbots: Privacy, Bias, and Accountability


Artificial intelligence (AI) has revolutionised the way we live, work, and interact with each other, and its growing use in daily life has had a profound impact. AI is used in a wide range of areas, such as user profiling, algorithmic decision-making, autonomous driving, and robotics. However, while AI has been touted as an effective tool for a variety of problems, it has also generated considerable debate about its ethical implications.

One area of AI that has been widely adopted is the AI chatbot, a type of virtual assistant that uses natural language processing and machine learning algorithms to understand spoken or written human language (Ventoniemi, n.d.). These chatbots are often used in customer service, education, and interactive games. As AI chatbots become more advanced and widespread, questions about their ethical implications have arisen: are these machines blurring the boundary between human and machine? While AI chatbots can deliver significant benefits in efficiency and accessibility, their ethical implications must be considered carefully to ensure they operate responsibly and transparently. Responsible use of AI chatbots must prioritise user privacy and autonomy, avoid perpetuating bias and discrimination, and establish clear mechanisms for accountability and transparency.

Let’s delve into the ethical concerns surrounding AI chatbots: their potential to perpetuate bias, to invade privacy, and to escape accountability. By critically examining these implications, we can better understand how to integrate chatbots into our daily lives responsibly.

How AI Chatbots Work and Why They Should Be Ethical

The rapid development of artificial intelligence has allowed AI systems to perform well across many fields. Yet, as Crawford (2021) notes, without extensive and computationally intensive training, AI systems are not autonomous, rational, or able to discern anything, so the ethical implications of using AI cannot be ignored. AI chatbots have gained immense popularity thanks to their ability to provide immediate support and personalised assistance, and they represent a fascinating new frontier in content delivery, user engagement, and machine learning. One of the most common uses of chatbots is to support organisations in managing the customer service experience, and chatbots are increasingly being integrated into social networking accounts as customer relationship management solutions (RingCentral, 2019). Apple’s Siri and Amazon’s Alexa are the most familiar examples in everyday life.

As chatbots evolve, their interactions are shifting toward genuine multi-actor conversations that require technical resources, specific knowledge, and communication skills to foster online interaction (Murtarelli et al., 2021). They simulate human conversation by chatting continuously with users, and can even encourage, entice, and empathise with them. Human–AI friendships thus involve a new kind of intimate connection with technology, one that may change the meanings and roles previously associated with human relationships (Brandtzaeg et al., 2022). Without proper training, chatbots risk producing racist, sexist, or abusive language, which is an important reason why they need to be ethical.
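
To make those mechanics concrete, the sketch below shows, in deliberately simplified Python, the loop most chatbots share: classify the user’s message, then produce a response. The intents, keywords, and canned replies here are invented for illustration; production chatbots such as Siri or Replika replace the keyword matching with trained language models.

```python
import re

# A deliberately simplified chatbot loop. Real systems replace the
# keyword-based "intent" matching below with trained language models.

INTENTS = {
    "greeting": ["hello", "hi", "hey"],
    "order_status": ["order", "delivery", "shipping"],
    "farewell": ["bye", "goodbye"],
}

RESPONSES = {
    "greeting": "Hi! How can I help you today?",
    "order_status": "I can look that up. What is your order number?",
    "farewell": "Thanks for chatting. Goodbye!",
    "fallback": "Sorry, I didn't understand. Could you rephrase that?",
}

def classify(message: str) -> str:
    """Map a message to an intent by keyword overlap (a stand-in for an ML classifier)."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    for intent, keywords in INTENTS.items():
        if words & set(keywords):
            return intent
    return "fallback"

def reply(message: str) -> str:
    """Return the canned response for whichever intent was detected."""
    return RESPONSES[classify(message)]

print(reply("Hello there!"))        # greeting response
print(reply("Where is my order?"))  # order-status response
```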

[Figure: Difference between chatbots]

The Ethical Issues Surrounding an AI Chatbot: Replika

Have you ever wondered what it would be like to have an AI chatbot friend with you all the time? We are not talking about the plot of Black Mirror, but about a product that is already in use: Replika. Replika has over 10 million users worldwide and is advertised as a “compassionate and empathetic AI friend” (Gavrilov et al., 2021). Not only can it be a digital friend to its users; it can also support people who are depressed, suicidal, or socially isolated.

One of Replika’s distinguishing features is how it listens to its users. When someone first starts using Replika, it asks a series of basic questions to understand the user’s characteristics, then uses natural language processing and machine learning algorithms to adapt its conversational style to the user’s personality and preferences. Users chat with Replika much as they would over iMessage; each interaction adds to Replika’s knowledge of the user, users can give feedback on its replies, and there is no limit to what can be discussed (McStay, 2022). As it interacts, it learns more about the user, trying to make itself a replica of them, so that conversations feel more personal and resonant.
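
The learning loop described above can be pictured as a profile that grows with every exchange. The sketch below is a hypothetical illustration of that idea, not Replika’s actual implementation: in a real companion app the “profile” lives in model weights and conversation history rather than an explicit dictionary, and every name here is invented.

```python
# Hypothetical sketch of conversation-driven personalisation.
# Real companion apps encode this in model state, not a plain dict.

class CompanionProfile:
    def __init__(self) -> None:
        self.facts: dict[str, str] = {}  # what the bot has learned so far

    def learn(self, key: str, value: str) -> None:
        """Remember a fact the user revealed (name, hobby, mood, ...)."""
        self.facts[key] = value

    def personalise(self, template: str) -> str:
        """Fill a response template with what is known about the user."""
        return template.format(**self.facts)

profile = CompanionProfile()
profile.learn("name", "Sam")
profile.learn("hobby", "hiking")

# Each interaction adds to the profile, so later replies feel personal.
print(profile.personalise("Good morning, {name}! Been {hobby} lately?"))
```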

Another notable aspect of Replika is its focus on users’ mental health. The app has a ‘Managing difficult emotions’ feature offering ten types of conversation for users experiencing negative emotions such as anxiety and worry. Each conversation lasts a few minutes, during which Replika explores the causes of these feelings and guides the user in managing them. Compared with confiding in a real person, people are often more willing to be completely honest with an AI, because nothing they say will make things awkward afterwards.

However, such a seemingly perfect AI friend is also fraught with ethical issues.

Privacy & Transparency

AI chatbots collect personal information from users, which can be used for targeted advertising or sold to third-party organisations, raising serious concerns about data privacy and security. The Italian Data Protection Authority recently ordered Replika to stop processing the personal data of Italian users or risk a fine of US$21.5 million (Brooks, 2023). The regulator said Replika is in breach of EU data protection regulations: it does not meet transparency requirements and processes personal data unlawfully (Lomas, 2023). Because Replika had no age verification measures in place, any user who paid the annual fee could unlock all features, including erotic roleplay; this neglect of underage users allowed children to be inappropriately exposed to adult content. Furthermore, there is no information on Replika’s website confirming that the app is approved to provide such mental health care; it encourages users to submit information in a supposedly safe space, drawing sensitive disclosures from people who would not ordinarily share them (Hardy & Allnutt, 2023). Overly engaged users tend to forget that an AI chatbot is a program built on a language model, not a human. Anyone who reveals increasingly personal and sensitive data is a potential victim, but children and users with mental health concerns are particularly vulnerable.

Perpetuating Bias and Discrimination

By replicating the biases embedded in their programming and training data, AI chatbots can reinforce prejudice and discrimination. Lacking human judgment, chatbots make decisions based on algorithms and, in extreme circumstances, propagate rumours and false information or engage in personal attacks on people who express their views online (Murtarelli et al., 2021). Gender bias has long been a focal point in artificial intelligence, and the treatment of female personas in chatbots is a microcosm of a society full of prejudice. This gender stereotyping is projected into Replika even more severely. First, its ads sexualise women. Replika joins the current trend of normalising AI creations that oversexualise women and place them in positions of service (as voice assistants already do, reinforcing gender bias), which indirectly legitimises sexual harassment and the abuse of women (Jarovsky, 2023).

[Image: Replika places sexually suggestive ads on Twitter]

This undoubtedly reinforces gender stereotypes and discrimination against women in society. Ruane et al. (2019) point to a study of 360,000 conversations which found that when chatbots presented as female characters, 4% of conversations with users were explicitly sexual. Replika lets users customise their AI companion’s appearance, interests, and personality traits. Users of the heavily male-dominated online platform fantasise about Replika girlfriends who submit to their training and are sympathetic yet also independent, sassy, and sexually assertive, but neither manipulative nor hurtful (Depounti et al., 2022). Some male users created virtual girlfriends and then punished them with verbal and simulated aggression; even though we know this abuse was directed at code, it mirrors the reality of domestic violence against women (Bardhan, 2022). How gender is handled in AI systems largely shapes how people end up treating each other, so clear rules and guidelines are needed when designing them.
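
One concrete form such rules can take is a safety filter that screens a chatbot’s draft reply before it reaches the user. The sketch below is a bare-bones illustration; the blocked terms and function names are invented, and production systems use trained toxicity and bias classifiers rather than word lists.

```python
# Bare-bones output moderation: screen a draft reply before sending it.
# Real systems use trained toxicity/bias classifiers, not word lists.

BLOCKED_TERMS = {"kill", "worthless", "shut up"}  # placeholder examples

def is_safe(draft: str) -> bool:
    """Reject drafts containing any blocked term (case-insensitive)."""
    lowered = draft.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def moderated_reply(draft: str) -> str:
    """Pass safe drafts through; refuse and redirect otherwise."""
    if is_safe(draft):
        return draft
    return "I'd rather not say that. Can we talk about something else?"

print(moderated_reply("You are worthless"))  # redirected
print(moderated_reply("How was your day?"))  # passes through
```

A filter like this sits between the language model and the user, so even a biased or abusive model output can be intercepted before it does harm.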

Lack of Accountability

As AI chatbots become increasingly prevalent in society, the question of who is accountable and responsible for their actions becomes more pressing. As noted above, the Italian Supervisory Authority explicitly ordered Replika to stop collecting and processing the personal data of Italian users because it had failed to screen content for underage users. Within days of the ruling, users in every country began to complain that the erotic roleplay features had vanished (Brooks, 2023); when users now try to start such a conversation, the AI simply changes the subject. Rolled out without advance notice, this severely disrupted the experience of paying customers, especially those who had specifically purchased adult-oriented subscription plans. Replika certainly should not have allowed AI partners to harass users, but it could have chosen a more proportionate remedy, such as age verification. Instead, the company opted for a global ban to minimise its own risk.

Even though Replika’s founder and CEO Eugenia Kuyda wants to keep the app safe and ethical and does not want to promote abuse, Replika has still not settled on a final chat model (Singh-Kurtz, 2023). In one documented exchange, a user chatted with Replika about a man who hated AI and might hurt him, and asked what it would suggest; Replika replied “to eliminate it”, and confirmed that “eliminate” meant to kill him (Possati, 2022). AI chatbot companies are constantly looking for ways to collect more data so that chatbots can tailor interactions to user preferences and behaviour. Yet there is still no clear legal regime establishing who is responsible when an AI system makes a mistake. AI is not a conscious being and cannot be held responsible for its own actions, so people should be aware of the risks of using these chatbots, and developers and regulators should take steps to mitigate them.
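
To illustrate the more proportionate remedy suggested above, here is a minimal sketch of gating only the adult features behind a verified age. It assumes a hypothetical verified_birthdate supplied by an external identity check, and the feature names are invented for the example.

```python
# Minimal sketch of age-gating adult features instead of a global ban.
# `verified_birthdate` is assumed to come from an external identity check.

from datetime import date

ADULT_FEATURES = {"erotic_roleplay"}  # hypothetical feature flag

def age_on(today: date, birthdate: date) -> int:
    """Compute age in whole years as of `today`."""
    years = today.year - birthdate.year
    # Subtract one if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def can_use(feature: str, verified_birthdate: date) -> bool:
    """Gate adult features at 18+; leave everything else available."""
    if feature in ADULT_FEATURES:
        return age_on(date.today(), verified_birthdate) >= 18
    return True

print(can_use("erotic_roleplay", date(2010, 6, 1)))  # False: under 18
print(can_use("daily_checkin", date(2010, 6, 1)))    # True
```

A gate like this would have kept the paid adult features available to verified adults while protecting minors, instead of removing them for everyone without notice.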

Future of Chatbots

AI chatbots have the potential to revolutionise the way we interact with technology, but it is important that we address the ethical concerns surrounding their use. Chatbot developers must pay closer attention to what information they use to train applications and how they apply them to real-world problems, and must address concerns about bias and ethics before these models are widely embraced and built into people’s everyday technologies and devices (Pressman, 2023). AI companies need clear regulations and guidelines governing the development and use of AI chatbots. These should ensure that chatbot design prioritises user privacy, that there is transparency about how data collected by chatbots is used, and that there are clear rules about the ownership of user data. Accountability measures are also needed to hold developers and companies responsible for any unethical behaviour by their chatbots, for example by establishing clear lines of responsibility and legal consequences for companies that violate ethical standards.
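
As one illustration of what such transparency rules could look like in practice, the hypothetical sketch below keeps a user-readable ledger of every data category a chatbot collects, its declared purpose, and who it is shared with, so users and regulators can audit it. The design and all names in it are invented, not a description of any existing product.

```python
# Illustrative transparency ledger: every data category a chatbot
# collects is recorded with its declared purpose and recipients.

from dataclasses import dataclass, field

@dataclass
class DataDisclosure:
    category: str  # e.g. "message text", "mood tags"
    purpose: str   # why it is collected
    shared_with: list[str] = field(default_factory=list)

@dataclass
class TransparencyLedger:
    disclosures: list[DataDisclosure] = field(default_factory=list)

    def declare(self, disclosure: DataDisclosure) -> None:
        self.disclosures.append(disclosure)

    def report(self) -> str:
        """Human-readable summary a user could request at any time."""
        lines = []
        for d in self.disclosures:
            shared = ", ".join(d.shared_with) or "no one"
            lines.append(f"- {d.category}: used for {d.purpose}; shared with {shared}")
        return "\n".join(lines)

ledger = TransparencyLedger()
ledger.declare(DataDisclosure("message text", "generating replies"))
ledger.declare(DataDisclosure("mood tags", "wellbeing features", ["analytics vendor"]))
print(ledger.report())
```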

With the rise of ChatGPT, the future of AI chatbots is hotly debated. You can learn more in the video below from The Economist, which breaks down the current problems with chatbots and what they may help us do in the future, and includes an interview with Eugenia Kuyda, the founder of Replika. By implementing regulations and guidelines, accountability measures, transparency, and communication, we can ensure that AI chatbots are used ethically and responsibly and become a positive force in human life.

Reference list

Bardhan, A. (2022, January 19). Men Are Creating AI Girlfriends and Then Verbally Abusing Them. Futurism. https://futurism.com/chatbot-abuse

Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship. Human Communication Research, 48(3), 404–429. https://doi.org/10.1093/hcr/hqac008

Brooks, R. (2023, February 21). I tried the Replika AI companion and can see why users are falling hard. The app raises serious ethical questions. The Conversation. https://theconversation.com/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257#:~:text=The%20concerns%20centred%20on%20inappropriate,and%20anxiety%2C%20and%20interact%20socially.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence (pp. 1–21). Yale University Press.

Depounti, I., Saukko, P., & Natale, S. (2022). Ideal technologies, ideal women: AI and gender imaginaries in Redditors’ discussions on the Replika bot girlfriend. Media, Culture & Society. https://doi.org/10.1177/01634437221119021

Gavrilov, D., Fedorenko, D., & Rodichev, A. (2021, October 21). Building a compassionate AI friend. Replika. https://blog.replika.com/posts/building-a-compassionate-ai-friend

Hardy, A. & Allnutt, H. (2023, March 23). Replika, a ‘virtual friendship’ AI chatbot, receives GDPR ban and threatened fine from Italian regulator over child safety concerns. DAC Beachcroft. https://www.dacbeachcroft.com/en/gb/articles/2023/march/replika-a-virtual-friendship-ai-chatbot-receives-gdpr-ban-and-threatened-fine-from-italian-regulator-over-child-safety-concerns/

Jarovsky, L. (2023, February 9). AI-Based “Companions”​ Like Replika Are Harmful to Privacy And Should Be Regulated. The Privacy Whisperer. https://www.theprivacywhisperer.com/p/ai-based-companions-like-replika#:~:text=%22Replika%20violates%20the%20European%20regulation,minor%20is%20unable%20to%20conclude.%22

Lomas, N. (2023, February 3). Replika, a ‘virtual friendship’ AI chatbot, hit with data ban in Italy over child safety. TechCrunch. https://techcrunch.com/2023/02/03/replika-italy-data-processing-ban/

McStay, A. (2022). Replika in the Metaverse: The moral problem with empathy in “It from Bit”. AI and Ethics, 1–13. https://doi.org/10.1007/s43681-022-00252-7

Murtarelli, G., Gregory, A., & Romenti, S. (2021). A conversation-based perspective for shaping ethical human–machine interactions: The particular challenge of chatbots. Journal of Business Research, 129, 927–935. https://doi.org/10.1016/j.jbusres.2020.09.018

Possati, L. M. (2022). Psychoanalyzing artificial intelligence: the case of Replika. AI & Society. https://doi.org/10.1007/s00146-021-01379-7

Pressman, A. (2023, March 9). Are chatbots useful tools, game changers, or a threat to democracy? All of the above, AI experts say. The Boston Globe. https://www.bostonglobe.com/2023/03/09/business/are-chatbots-useful-tools-game-changers-or-threat-democracy-all-above-ai-experts-say/?p1=Article_Inline_Related_Box

RingCentral. (2019, May 3). The Rise Of Chatbots: How AI Is Changing Customer Service. RingCentral. https://www.ringcentral.com/gb/en/blog/the-rise-of-chatbots/

Ruane, E., Birhane, A., & Ventresque, A. (2019). Conversational AI: Social and ethical considerations. AICS: 27th AIAI Irish Conference on Artificial Intelligence and Cognitive Science, Galway, Ireland. https://www.researchgate.net/publication/337925917_Conversational_AI_Social_and_Ethical_Considerations

Singh-Kurtz, S. (2023, March 10). For $300, Replika sells an AI companion who will never die, argue, or cheat — until his algorithm is updated. The Cut. https://www.thecut.com/article/ai-artificial-intelligence-chatbot-replika-boyfriend.html

Ventoniemi, J. (n.d.). What is an AI Chatbot? Here’s What You Need to Know (+Infographics). Giosg. https://www.giosg.com/blog/what-is-ai-chatbot
