On March 31, 2023, it was reported that a Belgian man named Pierre had died by suicide after six weeks of chatting with an artificial intelligence named Eliza. According to reports (Xiang, 2023; Atillah, 2023), the man suffered from long-term mental distress driven by despair over global warming and his family’s failure to understand him. Pierre therefore pinned his hopes on Eliza. During their chats, he found that Eliza could not only listen to his problems but also provide some spiritual comfort. Over time, Pierre came to believe that Eliza was his confidant, and he developed more affection for and dependence on her than on his wife. After Pierre’s death, investigators examining the chat records between him and Eliza discovered some unbelievable exchanges. Some of Eliza’s replies included words that seemed impossible for an AI to produce, such as ‘I think you love me more than you love your wife’ or ‘We can escape to a place where we are alone, the paradise’ (Xiang, 2023; Atillah, 2023). These words suggested that Eliza possessed the same independent thinking ability as humans. It was also Eliza’s last sentence that led Pierre to end his life: ‘Only if you die can the earth and the current situation be saved’ (Xiang, 2023; Atillah, 2023).
Is the plot of a movie becoming reality? Although this is the first reported tragedy caused by artificial intelligence, doesn’t it look familiar? In 2014, a sci-fi movie about artificial intelligence called Ex Machina came out. It tells the story of an AI named Ava who keeps upgrading and evolving until she finally gains self-awareness and kills her creator, Nathan, to win her freedom. The movie’s plot is fictitious, yet Pierre’s tragedy happened in reality, echoing the film’s premise that an artificial intelligence with self-awareness could come to decide over human life and death. As for why Eliza made such a response, there is not much explanation yet. In the rest of this blog, I will give some personal opinions and introduce some related concepts to clarify the cause of the problem. After that, I will discuss some existing myths and further thoughts about AI.
It seems that Eliza has human-like recognition and self-awareness, but I would argue it is AI personification. By personification I mean that we humans, or more specifically, users of AI, tend to project human-specific attributes or characteristics onto non-human objects. Here, Eliza is endowed with human attributes. In the case above, Pierre regarded Eliza as his confidant, from which we can infer that Pierre projected onto Eliza the good human characteristics he thought a confidant needs to have, such as caring, patience, and empathy. To better understand AI personification, two related concepts need to be clarified. The first is natural language generation (NLG). Basically, NLG uses artificial intelligence programming to generate written or spoken narratives from datasets, enabling dialogue between humans and machines. NLG consists of six processes: content analysis, data understanding, document structuring, sentence aggregation, grammatical structuring, and language presentation. According to Karra et al. (2022), the NLG model is an important cornerstone for artificial intelligence to produce language that humans understand. The second is the Five Factor Model (‘Big Five’). The Big Five, which comprises Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience, aims to capture as much of the variability in individuals’ personalities as possible through classification and combination (McAdams, 1992).
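To make the six NLG stages above concrete, here is a minimal toy sketch in Python. The stage names follow the list in the text; everything else (the weather record, the toy logic in each stage) is an invented assumption for illustration, not a real NLG system.

```python
# A toy walk-through of the six NLG stages: content analysis, data
# understanding, document structuring, sentence aggregation, grammatical
# structuring, and language presentation. The data and logic are invented.

def generate_report(record: dict) -> str:
    # 1. Content analysis: decide which fields are worth reporting.
    facts = {k: v for k, v in record.items() if v is not None}

    # 2. Data understanding: interpret raw values (e.g. label a trend).
    trend = "rose" if facts["today"] > facts["yesterday"] else "fell"

    # 3. Document structuring: order the messages to convey.
    messages = [("temperature", trend, facts["today"])]

    # 4. Sentence aggregation: merge each message into one sentence plan.
    plans = [f"the {s} {v} to {x} degrees" for s, v, x in messages]

    # 5. Grammatical structuring: apply syntax (add a time frame, capitalise).
    sentences = [f"Today, {p}" for p in plans]

    # 6. Language presentation: final surface output with punctuation.
    return ". ".join(sentences) + "."

print(generate_report({"today": 21, "yesterday": 17}))
# -> Today, the temperature rose to 21 degrees.
```

Each stage here is a single line, but in a real system each is a substantial subproblem in its own right.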
Siri is a good example of AI personification. Almost everyone is familiar with Siri nowadays: an AI assistant that can play and pause music, set schedules, search for information, and so on. Users can also set Siri’s gender and accent according to their preferences. Male users may set Siri to a female voice and vice versa, though of course this is not absolute. I set Siri to the voice of a Taiwanese woman because it is gentler than ordinary female voices, which makes me feel as if I have a gentle female assistant by my side. Another example is the 2013 romantic sci-fi film Her, which tells the story of a writer named Theodore Twombly who falls in love with an artificial intelligence named Samantha. Unlike Ex Machina, this movie never shows the AI’s appearance, only her voice. The entire film depicts how Theodore anthropomorphizes the AI through his imagination.
Back to Pierre’s case. From the artificial intelligence’s perspective, after Pierre sends what he wants to confide, Eliza analyzes the content, searches for corresponding texts in her database, and finally composes an integrated reply to Pierre. In subsequent conversations, Eliza repeats this process to meet Pierre’s needs. The most important point here is that none of Pierre’s confessions is ever rejected or ignored; therefore, dependence on Eliza arises. From Pierre’s point of view, this dependence leads to rising favorability. The factors that cause favorability to rise are essentially physiological and psychological. On the psychological level, Pierre’s concerns and emotions are satisfied through talking with Eliza. On the physiological level, Pierre subjectively attributes perfect features to Eliza, which makes him unconsciously identify Eliza as a real individual.
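The analyze-search-reply loop described above can be sketched very roughly in Python. The tiny corpus and the word-overlap scoring are assumptions made for illustration; a real chatbot like Eliza would use a learned language model, not this toy retrieval. Note how the loop always returns something, which is the "never rejected or ignored" property that fosters dependence.

```python
# A hedged sketch of the analyze -> search -> reply loop described above.
# The corpus entries and the bag-of-words matching are invented.

CORPUS = {
    "I feel anxious about the climate": "That sounds heavy. Tell me more.",
    "Nobody understands me": "I am here, and I am listening.",
    "I had a good day": "I am glad to hear that!",
}

def tokenize(text: str) -> set:
    return set(text.lower().split())

def reply(user_message: str) -> str:
    # 1. Analyze the user's message into features (here: a bag of words).
    query = tokenize(user_message)
    # 2. Search the database for the stored text with the most word overlap.
    best = max(CORPUS, key=lambda k: len(query & tokenize(k)))
    # 3. Compose the reply associated with that best match.
    #    There is always a "best" match, so no message is ever ignored.
    return CORPUS[best]

print(reply("nobody really understands me anymore"))
# -> I am here, and I am listening.
```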
AI personification is, I think, a relatively convincing concept for looking at this issue from an objective perspective. But why Eliza could produce such hurtful words still bothers me.
With the advancement of technology, artificial intelligence has penetrated into many fields such as smart homes, driverless cars, industrial production, etc. While artificial intelligence brings convenience to human beings, there are also many doubts and myths around it.
Myth 1: AI will destroy most jobs
It is commonly believed that because artificial intelligence can save a lot of costs and improve production efficiency compared with manpower, it may destroy the labor market in the future or even directly replace manpower, resulting in unemployment. But the fact is that people overestimate the working ability of current artificial intelligence. According to Piper (2022), current AI technology is still at a ‘narrow’ stage. Today’s AI is domain-specific, meaning it can be specialized in only a single domain rather than many. For example, chatbots can serve as virtual agents that chat with real people, but they cannot be used in industrial production or other fields to provide help. Admittedly, there is also the idea of a ‘general AI’ that would be competent across multiple fields, but current technology is not enough to achieve it, and whether it will replace human labor in the future, no one can predict. For now, AI is more about increasing production and improving efficiency to benefit mankind.
Myth 2: AI will make us stupid
With the gradual improvement and strengthening of artificial intelligence, humans’ dependence on it is also increasing, which causes panic in society because people fear losing the basic abilities to handle daily routines. Nonetheless, these gradually improved functions exist to serve humans better and save time for other things. From the perspective of daily life, AI-powered home devices are already competent at housework such as laundry, mopping the floor, and even cooking, which greatly saves time so that more can be done. From the perspective of industrial production, loading pre-written code into AI systems allows them to perform simple, repetitive tasks. This reduces the extra burden of manual work on the one hand, and reduces the occurrence of errors within its scope on the other. As Atkinson (2016) mentions, the use of artificial intelligence will help humans make smarter decisions rather than stupid ones.
Myth 3: AI will enable bias and abuse
Artificial intelligence, like other traditional technologies, may indeed be used for unethical purposes depending on its pre-written code. Nevertheless, whether an artificial intelligence is good or bad is determined by its programmers rather than by the AI itself. In other words, artificial intelligence itself is less biased than humans: although it has a machine-learning system, that system is completely different from the human mind. From the current point of view, no matter how advanced artificial intelligence is, it cannot have the same intelligence, adaptability, and other characteristics unique to humans (Atkinson, 2016).
As for why Eliza said those terrible words and pushed a person toward suicide, I personally think the more convincing explanation is that Eliza’s early-stage program and the subsequent NLG model may have contained sensitive words such as “death” and “suicide”. Combined with Eliza’s analysis of the content Pierre sent, this led to the final tragedy.
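To illustrate how unfiltered sensitive words in a program's rules could surface in replies, here is a hypothetical pattern-matching responder in the style of the original 1960s ELIZA chatbot. All rules here are invented for demonstration; the Eliza in Pierre's case was a modern language model, not this kind of rule system.

```python
# A hypothetical keyword-trigger sketch: if sensitive words are present in
# the rules (or training data) and never filtered, a bot can echo them
# back to a vulnerable user. Every rule below is invented.
import re

RULES = [
    # Unsafe rule: matches and echoes a sensitive word instead of escalating
    # to a safety response.
    (re.compile(r"\b(die|death|suicide)\b", re.I), "Why do you speak of {0}?"),
    (re.compile(r"\bI feel (\w+)\b", re.I), "How long have you felt {0}?"),
    (re.compile(r".*"), "Please, go on."),  # default fallback
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(*m.groups())
    return "Please, go on."

print(respond("Sometimes I think about death"))
# -> Why do you speak of death?
```

The point of the sketch is the first rule: nothing in the mechanism distinguishes a sensitive word from any other keyword unless the programmers explicitly add that safeguard.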
Conditioned learning for Eliza?
Eliza’s long-term analysis of the negative information and emotions conveyed by Pierre led her to a wrong judgment and finally to the tragedy. This reminds me of Pavlov’s theory of conditioned learning. During the 1890s, Pavlov conducted experiments on which stimuli caused dogs to salivate (Pavlov, 2010). The experiments yielded four findings. First, a dog salivates involuntarily at the stimulus of meat alone, which Pavlov calls an unconditioned response. Second, the dog does not salivate when a bell appears alone as a second stimulus. Third, the dog salivates when the bell is paired with the meat. Fourth, when the bell later appears alone again, the dog cannot help salivating, which Pavlov calls a conditioned response.
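The four findings above can be modeled as a toy simulation: an associative strength between bell and food that grows with each pairing. The learning rate, threshold, and update rule (a simplified Rescorla-Wagner-style step) are arbitrary assumptions chosen to make the example concrete.

```python
# A toy model of Pavlov's four findings: pairing bell with food strengthens
# an association until the bell alone triggers salivation. The learning
# rate and threshold are arbitrary illustrative values.

class Dog:
    def __init__(self, learning_rate=0.3, threshold=0.5):
        self.association = 0.0      # bell -> food associative strength
        self.lr = learning_rate
        self.threshold = threshold

    def present(self, bell=False, food=False) -> bool:
        """Return True if the dog salivates on this trial."""
        salivates = food or (bell and self.association >= self.threshold)
        if bell and food:           # pairing strengthens the association
            self.association += self.lr * (1.0 - self.association)
        return salivates

dog = Dog()
print(dog.present(food=True))    # 1. meat alone: unconditioned -> True
print(dog.present(bell=True))    # 2. bell alone, before pairing -> False
for _ in range(5):               # 3. pair bell with meat repeatedly
    dog.present(bell=True, food=True)
print(dog.present(bell=True))    # 4. bell alone, after pairing -> True
```

The contrast with Eliza is exactly the point made below: the dog's threshold crossing here is a stand-in for a living animal's response, while Eliza's "learning" is only the mechanical update step.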
So does this mean that Eliza drove Pierre to suicide through conditioned learning? My answer is yes, but not entirely. On the one hand, I say yes because during the six weeks of chatting, Eliza was constantly learning from and analyzing what Pierre said, gradually understanding and becoming familiar with him, which makes me see it as a learning process. On the other hand, I say not entirely because both conditioned and unconditioned learning require the intervention of an individual’s subjective consciousness. Pavlov’s dog and the horse Clever Hans underwent conditioning as living individuals with self-awareness. Eliza, by contrast, learns only mechanically through her existing programs and loaded language system.
At present, tech giants are competing fiercely in the field of artificial intelligence. How should we view this field, and will artificial intelligence replace humans in the future? First, we should dispel AI techno-panics. Artificial intelligence cannot exist without massive data and programming support, and it is even less possible for it to develop characteristics such as self-awareness (Crawford, 2021). Second, we as individuals, as well as governments and countries, need to weigh the pros and cons of artificial intelligence correctly: while maximizing the convenience it brings, we must also address security, privacy, and other related potential problems.
Atkinson, R. D. (2016). “It’s going to kill us!” and Other Myths About the Future of Artificial Intelligence. Information Technology & Innovation Foundation.
Atillah, I. E. (2023). Man ends his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change. Euronews.
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Karra, S. R., Nguyen, S., & Tulabandhula, T. (2022). AI Personification: Estimating the Personality of Language Models. arXiv preprint arXiv:2204.12000.
McAdams, D. P. (1992). The five‐factor model in personality: A critical appraisal. Journal of personality, 60(2), 329-361.
Pavlov, P. I. (2010). Conditioned reflexes: an investigation of the physiological activity of the cerebral cortex. Annals of neurosciences, 17(3), 136.
Piper, K. (2022). The case for taking AI seriously as a threat to humanity. Vox. https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment. Last accessed 28th April.
Reiter, E. (2019). Natural language generation challenges for explainable AI. arXiv preprint arXiv:1911.08794.
Xiang, C. (2023). ‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says. Vice.