
When Victor Frankenstein created his Monster, he didn’t think far enough ahead to consider what might happen once his creation was complete. Was this Monster well-equipped to function independently in the real world? Would its individuality ever be recognised by the humans it interacted with?
When the Monster was finally unleashed, Frankenstein promptly abandoned it to its own devices. And the rest, as they say, is history.
Increasingly, I have found comfort in Mary Shelley’s foresight when grappling with the concept of artificial intelligence (AI).
When trying to understand the need for AI, a familiar wall I keep running into is the struggle to grasp just how closely artificial intelligence mirrors human intelligence. The further I dig, however, the more I realise this isn’t a wall at all. It’s a large net, complete with a booby trap to capture my attention.
That’s the whole point.
Of course, artificial intelligence is nothing new. Chances are you will have come across the discourse, particularly in recent times as platforms such as ChatGPT revolutionise the delivery of thought. If so, there is an even higher chance you feel slightly unsettled by it all – that strange, inexplicable niggling in the back of your mind when you see a machine perform an innately human action, or the ease with which a machine can spit out an entire academic passage before you’ve had the chance to write one yourself.
Defining the Undefinable
Defining artificial intelligence is anything but easy. But let’s consider one overarching definition as a foundational understanding. An artificial agent is any “computer, computer program, device, app, machine, robot, or sim that performs behaviours which are considered intelligent if performed by humans, learns or changes based on new information or environments, generalises to make decisions based on limited information, or makes connections between otherwise unconnected people, information or other agents” (Shank et al., 2019, p. 258).
Do you notice anything about this definition? If we were to remove the mediums of AI (computer, device, app), all of these functions could be performed by humans. When you break it down, AI is essentially an embodiment of human capabilities – so why do we need AI? And when can it become dangerous?
Reputational Harm From AI
Recently, The Guardian (2023) was alerted to an article supposedly written by one of its employees. On further inspection, it was found to be a fake generated by ChatGPT. The article had been so well written, and so precisely in line with The Guardian’s ethos, that the person who found it hadn’t suspected a thing. And this happened more than once.
ChatGPT is the latest brainchild of OpenAI, an American artificial intelligence research lab. According to its website, ChatGPT is an AI model that “answers follow-up questions, admits its mistakes, challenges incorrect premises and rejects inappropriate requests” (OpenAI).
Makes sense, right? Not harmful at all? The issue with ChatGPT, however, is that it isn’t really aware of who is putting information into its system, what their purposes are or what their underlying biases may be. Whether somebody claims to be an employee of The Guardian has no bearing on the AI model whatsoever – ChatGPT is built simply to give an answer to the request it receives.
And once that fake article is released into the public sphere, responsibility for it cannot simply be laid at ChatGPT’s door. This is troubling because ChatGPT can invent sources, and in doing so shift readers’ trust towards potentially unreliable material. A core feature of respectable journalism lies in its integrity, and ChatGPT’s ability to disrupt this so easily is worrying not only for news platforms, but for any company that prides itself on valued employee-customer interaction.
Interestingly enough, The Guardian’s head of editorial innovation, Chris Moran (2023), does not seem altogether disturbed by any of this. “We are excited by its potential,” he writes, when reflecting on the value of generative AI, “but first we must understand it, evaluate it and decode its potential impact on the wider world.”
At the cost of journalistic integrity, however, just how valuable can AI be? Anyone who decides to engage with AI would do well to remember that you can never be sure what an AI model has learned from the data it has been given; nor will it ever account for the variables of your life outside the computer (Crawford, 2021).
The Unreliability of AI
In another recent instance, a few Samsung employees reportedly leaked sensitive data through the use of ChatGPT – not once, but three times. According to a Mashable (2023) article, while using ChatGPT for help at work, the employees accidentally shared confidential information: pasting confidential source code into the bot to check for errors, sharing code with ChatGPT to “request code optimisation”, and uploading a recording of a meeting so it could be converted into notes. As Cecily Mauran (2023) from Mashable reminds us, information shared with ChatGPT is never kept private, and is likely “used to further train the model.” That is why it is always worth noting the kind of information that is entered into the platform.
Kate Crawford, author of the book Atlas of AI, notes that the very idea of AI is “promiscuous” – it is malleable, messy and commands a large spatial and temporal reach. It is constantly open to reconfiguration, and this very fact is worth considering when we interact with emerging and imperfect platforms like ChatGPT (Crawford, 2021, p. 19).
Fundamentally, artificial intelligence is an experiment in future avenues of thought, and how machine-based technologies can embody human intelligence without human intervention. With something as abstract as AI, which hinges crucially on the fluid nature of technology and innovation, the parameters of its understanding need to be flexible, to “allow the field to grow” (Abbass, 2021, p. 94).
Everybody and every organisation interacts differently with AI, giving the field its “multidisciplinary nature” (Abbass, 2021, p. 94), and this space for interpretation means that AI models are not complete in their current form. Platforms like ChatGPT collect information from their users, feeding the model, and the more they are used, the more we uncover about their potential. During this process, OpenAI is also developing ChatGPT further with the information it collects. The problem? An imperfect tool is being “deployed without the necessary safeguards being put in place” (Doshi et al., 2023, p. 7).
In the case of Samsung, the leaked data cannot be recalled, nor does the company intend to try. Its solution? To build its own internal AI chatbot to prevent such a thing from happening again (Mashable, 2023). The same pattern is being repeated here; as with The Guardian, companies seem to remain interested in the power of AI even after being exposed to its voracious nature.
This is the point at which I would ask you what we can take away from Mary Shelley’s tale.
So What’s the Problem?
Experts widely agree that, with AI, regulation is key. But as we’ve established with ChatGPT, many AI platforms lack regulation. In Atlas of AI, Crawford highlights that within the AI sphere there exists a certain myth that artificial intelligence is “something that exists independently”, as though it were completely separate from human forces like society, culture, history and politics (Crawford, 2021, p. 5).
Publicly, AI is often viewed as though it has a mind of its own – a force that arrived under the cover of darkness and began to permeate our society before we could notice. This is the first issue with AI awareness. Not only does this view ignore the years of progress and change that have moulded human intelligence into what it is today, it also erases any acknowledgement of human error in the creation of AI platforms.
Crawford (2021) asserts that AI is neither artificial nor intelligent. In reality, it is the product of innately human intelligence, created through years of labour, research, resources, logistics, histories and classifications. AI systems are not autonomous robots with minds of their own, but rather machines specifically designed to produce a desired outcome.
The research surrounding AI has also consistently revealed biases that are characteristically present in the creation of these platforms. In one instance, when ChatGPT was asked whether someone would make a good scientist based on race and gender, the bot determined a scientist’s worth according to whether they were white or male (Lin, 2022, as cited in Doshi et al., 2023). This is certainly concerning given the reach ChatGPT has, but not entirely surprising given that AI can “perpetuate the prejudice of the data on which they are trained” (Doshi et al., 2023, p. 6).
You could argue that biases are also inherently present in humans. This is true. The difference is that when companies use platforms like ChatGPT, they risk revealing biases where none were intended.
Consider The Guardian case again: unchecked published articles which do not actually reflect the newspaper’s ideology are harmful to its reputation. Distinguishing between a real and a fake Guardian article is hard even for the experts. For the average reader, it is almost impossible.
Why Isn’t Being Human Enough?
After all this, we must ask ourselves: why isn’t being human enough? Humans can accomplish everything that AI can – in theory, anyway; if we are the ones creating AI, surely we also possess the capability to produce what a machine can, right?
But rarely is it this simple. Scientifically speaking, AI has astonishing potential in the STEM fields, making sense of many different types of data, including but not limited to electronic health records, digital sensor monitoring of health signs and measurements from ‘omics’ fields such as genomics, proteomics and metabolomics (Nature, 2015).
According to the Harvard Business Review (2018), an appropriately created AI machine, with proper regulations in place, can “amplify our cognitive strengths” (such as Google), “interact with customers so employees can be freed to perform higher level tasks” (like customer service bots on websites), and “embody human skills to extend our physical capabilities” (like machines in big factories).
The problem arises when AI is created to embody human intelligence in all its emotional vulnerability – partly because this is essentially impossible. The ethics of AI are less fraught when we’re talking about the kind of AI that performs mundane tasks. However, the more machines are expected to “emulate us as social beings and attempt to integrate fundamentally human ideas of judgement, empathy or fairness into an AI equation,” says CEO of LatentView Analytics, Rajan Sethuraman, “the more we face the same ethical issues that accompany human interactions” (Forbes, 2019).
These ethics are complicated enough for humans, but now we’ve got a whole other system to worry about, one that is constantly developing on its own with the tools that feed its growth. It won’t be long until such systems begin thinking for themselves on larger scales, with increasingly disastrous consequences.
Walz identifies that AI has the potential to result in “fundamental changes in methods of social interactions, and ultimately even impact what is considered to be formative for human self-perception” (Walz, 2017, p. 758). In other words, constantly evolving AI could profoundly influence how we view intelligence – where do humans stop thinking and when does AI begin filling in the gaps?
Ultimately, AI can benefit our society if it is regulated and monitored correctly. Perceiving AI as an entity separate from its human creators undermines any chance of it succeeding in a safe and beneficial way.
For now, there seems to be no slowing down of AI advancements, and people appear more receptive to AI that is human-centred. Yet if history is anything to go by, humans have a habit of creating something larger than themselves and letting it loose before they have understood its true potential.
Victor Frankenstein failed to understand his moral obligation towards his creation. Perhaps the best we can do now is to watch the plight of Frankenstein’s monster from afar, and hope we have shelter when it comes knocking at our door.
References
Abbass, H. (2021). Editorial: What is Artificial Intelligence? IEEE Transactions on Artificial Intelligence, 2(2), 94–95. https://doi.org/10.1109/TAI.2021.3096243
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t
Doshi, R. H., Bajaj, S. S., & Krumholz, H. M. (2023). ChatGPT: Temptations of Progress. American Journal of Bioethics, 23(4), 6–8. https://doi.org/10.1080/15265161.2023.2180110
Ethics of artificial intelligence. (2015). Nature, 521(7553), 415. https://doi.org/10.1038/521415a
Fast Company. (2022). How to trick OpenAI’s ChatGPT. https://www.fastcompany.com/90819887/how-to-trick-openai-chat-gpt
Forbes. (2019). How Do We Create Artificial Intelligence that is More Human? https://www.forbes.com/sites/jenniferhicks/2019/03/19/how-do-we-create-artificial-intelligence-that-is-more-human/?sh=1eea980d1492
Harvard Business Review. (2018). Collaborative Intelligence: Humans and AI Are Joining Forces. https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces
Journal of Medical Ethics. (2020). Revisiting the lessons of Frankenstein. https://blogs.bmj.com/medical-ethics/2020/02/11/revisiting-the-lessons-of-frankenstein/
Mashable. (2023). Whoops, Samsung workers accidentally leaked trade secrets via ChatGPT. https://mashable.com/article/samsung-chatgpt-leak-details
OpenAI. (2015-2023). https://openai.com/blog/chatgpt
Shank, D. B., Graves, C., Gott, A., Gamez, P., & Rodriguez, S. (2019). Feeling our way to machine minds: People’s emotions when perceiving mind in artificial intelligence. Computers in Human Behavior, 98, 256–266. https://doi.org/10.1016/j.chb.2019.04.001
The Guardian. (2023). ChatGPT is making up fake Guardian articles. Here’s how we’re responding. https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article
Walz, A. (2017). A Holistic Approach to Developing an Innovation-Friendly and Human-Centric AI Society. IIC – International Review of Intellectual Property and Competition Law, 48(7), 757–759. https://doi.org/10.1007/s40319-017-0636-4