Artificial Intelligence: Human Emancipation or Terminator Skynet?

Keywords: AI, dystopian, dangers, artificial intelligence, AGI

Introduction:

ChatGPT, Gemini, Sora, Devin: if any of these sound familiar to you, congratulations, you are among the minority of people keenly plugged into the nascent AI ecosystem. For many others, however, these terms are foreign and at times fear-inducing, owing to an information gap about what artificial intelligence entails, what it could do for them and how it may affect their lives. A 2024 RMIT report stated that “almost half (47%) of [Australian] employees have never used generative AI in their role, and 73% say this is because they don’t believe generative AI is relevant” (RMIT Online, 2024).

Mainstream media’s recent blitz of AI-related news has made the general population realize that something is moving rapidly through society, yet most people have not reacted or got a handle on it, creating anxiety and a climate of simply hoping for the best.

This article hopes to address these latent anxieties about AI and automation, and to reassure readers that, despite the fears and dangers surrounding new consumer-grade artificial intelligence services, people still possess the agency to ride this wave of AI innovation and achieve a better quality of life for themselves and their loved ones.

AI Fears: Cognitive Atrophy

To get an accurate sense of AI fears, look no further than our educational institutions, where most if not all students use generative AI systems like ChatGPT and Gemini to aid their ideation and writing tasks. Most schools prohibit the practice, and school management goes to great lengths to clamp down on AI use, doing its best to detect and punish students caught “cheating”.

The underlying motivation for such negativity towards generative AI services betrays the fear that students have conveniently offloaded their cognitive and critical learning to the AI, resulting in what a 2023 research paper termed “cognitive atrophy”, where “essential mental faculties [diminish] due to an over-reliance on automated systems” (Dillu, 2023).

This dumbing down of human learning would logically lead to a stagnation of human-driven knowledge and an unhealthy, unsustainable reliance upon AI systems to run essential facets of our lives. In the event of a catastrophic breakdown of AI or its affiliated automated systems, would humans still be capable of filling the gaps to sustain essential services, albeit at a slower pace?

This fear is not without merit: a 2022 investigation found that “GPS-based navigation dependency [like Google Maps] was associated with the decreased ability in the efficiency of spatial target search and memory when performing the pathfinding event” (Yan et al., 2022). However, for every such cautionary tale of technological development, there are corresponding examples proving otherwise; case in point, the advent of Google.

One criticism Google faced early on was that “spending too much time on the search engine could be making us forgetful … [and result in our] memory, like other tools, decaying without exercise” (CRB & team, 2013).

In reality, however, rather than making humans dumber, Google became a quintessential part of our lives: “Google” has entered everyday language as a verb, and humans have become adept synthesizers and processors of information at a speed and scale unseen in human history.

AI Fears: Total Job Displacement (2022 – 2024)

The second AI fear, which poses a more immediate concern to society, is the harmful displacement of employment involving manual repetitive labour. While such concerns have existed for years, it was only from 2022 that generative AI like ChatGPT began to accelerate and materialize job displacement within the employment market.

Through rose-tinted glasses, one would believe that AI undertakes manual repetitive labour, emancipates humans from the drudgery of manual work and frees people to focus on higher-level, intellectually stimulating tasks. However, this is predicated upon a narrow and elitist belief that everyone wishes to escape manual labour and yearns for non-manual, intellectually stimulating work. In reality, the introduction of generative AI tools has already impacted the global freelancer job market, where “analysis showed that after ChatGPT’s release [in November 2022], the number of monthly jobs for writing-related freelancers on Upwork declined by 2%, while monthly earnings declined by 5.2%” (Reshef et al., 2023).

In the longer term, with greater AI sophistication, even industries previously impervious to AI displacement are increasingly at risk of disruption. Creative occupations requiring “human creativity, such as writing, creative art, and making music” (Timothy, 2023) are facing off against AI visual content generators like DALL-E and Sora, which produce visual content near-instantaneously, easily undercutting what a human artist or videographer can produce.

Even complex technical roles like software programming are feeling the heat with the launch of Devin, billed as the first AI software engineer, which aims to be “a tireless, skilled teammate … [able to] plan and execute complex engineering tasks requiring thousands of decisions … [and] recall relevant context at every step, learn over time, and fix mistakes” (Wu, 2024). When even the “creative” human employee faces such overwhelming odds against an AI equivalent, the writing is on the wall: perform, or risk being replaced by AI.

Technology optimists may argue that while certain positions have been extinguished, new opportunities have arisen from this nascent AI industry. A cogent case study is that of prompt engineers, who are desperately needed to train, instruct and direct image-generation AI systems to accurately create the image of one’s desire. Prompt engineers are in such high demand that early adopters of these AI technologies found roles offering “salaries as high as $375,000” that “don’t always require degrees in tech” (Nguyen, 2023).
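
To make the craft concrete, here is a minimal sketch of directing an image-generation model with an engineered prompt. It uses the OpenAI Python SDK’s images endpoint; the model choice, prompt wording and parameters are illustrative assumptions, not details drawn from the sources cited above.

# Minimal sketch of "prompt engineering" an image-generation model.
# Assumes the openai SDK is installed and the OPENAI_API_KEY environment
# variable is set; the model name and prompt text below are illustrative,
# not prescribed by the article.
from openai import OpenAI

client = OpenAI()

# A deliberately engineered prompt: subject, style, lighting and framing
# are all spelled out so the model has less room to guess.
prompt = (
    "A photorealistic portrait of a retired shipwright in his workshop, "
    "golden-hour side lighting, shallow depth of field, 85mm lens look, "
    "warm colour grade, no text or watermarks"
)

response = client.images.generate(
    model="dall-e-3",  # hypothetical model choice for this sketch
    prompt=prompt,
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image

Much of a prompt engineer’s day-to-day work amounts to iterating on strings like the one above until the output matches the client’s intent.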

This sparked a wave of AI job euphoria, but as quickly as these new jobs emerged, it is increasingly clear that they are fleeting at best. This is because “future generations of AI systems will get more intuitive and adept at understanding natural language, reducing the need for meticulously engineered prompts … [and] AI language models like GPT4 already show great promise in crafting prompts … rendering prompt engineering obsolete” (Acar, 2023). This constant state of flux within the AI industry perpetually disrupts the status quo, rendering any “new” employment opportunities obsolete as quickly as they came about.

Hence, if the issue of mass-scale job displacement is not managed appropriately at the political, economic and personal levels, unemployment will lead to mass disenfranchisement within society, where “large masses of idle people will further threaten social stability because they could easily succumb to demagoguery” (Kile, 2013).

AI Fears: Artificial General Intelligence (AGI / Skynet)

The last but most catastrophic fear pertaining to the proliferation of AI is the unexpected development of a rogue Artificial General Intelligence (AGI). AGI is artificial “intelligence equal to humans [where] it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future … [making it] indistinguishable from the human mind” (IBM, 2024).

Consequently, the main concern is that a “superintelligent AGI poses an existential risk to humans because there is no reliable method for ensuring that AGI goals stay aligned with human goals [resulting in the] perceived risk of a world catastrophe or extinction from AGI” (Mandel, 2023).

In May 2023, hundreds of executives from the top global AI firms signed an open statement calling for the “risk of extinction from AI” (CAIS, 2023) to be made a global priority, and OpenAI (creator of ChatGPT) proactively called for AI regulation, requesting an “international authority that can inspect systems, require audits, test for compliance with safety standards … [so as to] reduce existential risks” (Altman et al., 2023).

Later, in November 2023, “Nvidia CEO Jensen Huang [even forecast that] artificial general intelligence will be achieved in five years” (Mok, 2023), further stoking AGI fears. While such predictions are still rooted in theory, the threat of a rogue AGI materializing and wiping out humanity has become a foreseeable dystopian future we should all be acutely aware of.

Doom & Gloom?

Apart from an AGI apocalypse, the other AI fears carry near-term negative consequences but are known knowns; it is a matter of garnering the political and social will to move at speed and enact circuit-breaker measures that mitigate these downsides.

These forecast problems associated with AI are surmountable. Instead of harbouring an irrational fear of the untested, one would be wiser to adopt a cautiously optimistic outlook on AI and automation, riding the waves of technological advancement while adroitly managing the risks with sensible preventive guardrails.

Such an approach is echoed by the Australian Minister for Industry and Science, the Hon Ed Husic, who believes that “AI presents a huge opportunity to boost productivity, but we must help make sure the design, development and deployment of AI technologies is safe and responsible” (Reuters, 2023), enabling the extraction of concrete benefits without rashly ploughing headfirst into every new technology that comes along.

AI Governance & Regulatory Framework:

To safely enjoy these AI benefits, the deployment of such services must go hand in hand with AI regulation that guards against the dangers covered above. While Australia does not yet have AI-specific legislation, it relies on voluntary mechanisms like the “AI Ethics Framework to guide the responsible design, development and implementation of AI”. However, given the government’s “pressure to prevent AI from causing harm” (Reuters, 2023), it may only be a matter of time before Australia introduces legislative AI safeguards to proactively get ahead of the problem.

Should this happen, Australia can draw inspiration from the European Union’s AI Act, which places a strong emphasis on reducing AI harms: “all AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned” (Commission, 2024). While this approach comprehensively shields people from AI harms, its unequivocal readiness to ban even “toys using voice assistance” (Commission, 2024) may induce a chilling effect on AI innovation and cause countries to miss out on AI benefits owing to over-regulation.

Comparatively, a lighter-touch regulatory approach to consider is the UK’s unabashedly “pro-innovation” principle, which seeks to “balance innovation and safety … [while avoiding the EU’s] more prescriptive legislative measures”, allowing its “existing regulators to interpret and apply within their own remits to guide responsible AI design, development, and use”. In so doing, it provides “businesses with the necessary regulatory clarity to adopt and scale their investment in AI, thereby bolstering the UK’s competitive edge” (Gallo & Nair, 2024).

Regardless of when Australia decides to enact its AI laws, AI regulation appears set to become the norm, and Australia now has the benefit of hindsight: it can choose the most suitable AI policies from multiple jurisdictions to fit its unique considerations and safeguard its interests.

Exemplar AI Case Study: Klarna

Despite the presently unregulated nature of AI, there are already exemplary instances of AI implementation where the upsides are maximized while the drawbacks are kept at bay. One such instance is the AI automation of repetitive functions like customer service, which demands immediacy and accuracy at the moments customers most desperately require assistance.

In February 2024, global payments company Klarna publicized its AI success, showcasing how its “AI assistant handles two-thirds of customer service chats in its first month”, the equivalent workload of 700 full-time human live agents, while maintaining customer satisfaction scores and reducing inaccuracies in help resolution (Klarna, 2024).

Customers reportedly received the appropriate help in two minutes, compared to eleven minutes before, with the service available 24/7. This is a prime example of AI radically solving a perennial “agent-shortage” problem, leading to an estimated US$40 million profit improvement and enhanced customer satisfaction, all without exploiting or overworking employees.
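
As a rough, back-of-envelope illustration of that efficiency gain, the sketch below restates the resolution times quoted above; the monthly chat volume is a purely hypothetical assumption, since Klarna’s exact volume figures are not reproduced here.

# Back-of-envelope sketch of the efficiency gain described above.
# Only the resolution times come from the article; the chat volume
# is a hypothetical figure used purely for illustration.
OLD_MINUTES = 11                 # average resolution time before the AI assistant
NEW_MINUTES = 2                  # average resolution time with the AI assistant
ASSUMED_MONTHLY_CHATS = 500_000  # hypothetical volume, not a Klarna figure

minutes_saved_per_chat = OLD_MINUTES - NEW_MINUTES
hours_saved_per_month = ASSUMED_MONTHLY_CHATS * minutes_saved_per_chat / 60

print(f"{minutes_saved_per_chat} minutes saved per chat "
      f"(~{minutes_saved_per_chat / OLD_MINUTES:.0%} faster)")
print(f"~{hours_saved_per_month:,.0f} customer-hours saved per month "
      f"at {ASSUMED_MONTHLY_CHATS:,} chats/month")

At any plausible volume, the arithmetic makes clear why such an assistant can absorb the workload of hundreds of full-time agents.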

Klarna has proven that when businesses choose to ethically implement AI and not view profitability as a zero-sum game, it is possible to achieve a beneficial outcome for all stakeholders, where AI and human interests co-exist in harmony.

Conclusion:

It is now clear that while AI brings tremendous uncertainties and dangers, it is equally capable of unprecedented social good. The key is the human agency involved in the deployment and application of such AI systems.

Whether an AI system is used to brutally displace human workers in a blind quest for obscene profits, or to solve a genuine “wicked problem” in a way that benefits all stakeholders, these are human decisions that drive the future direction of AI. As it stands, humans are very much in control and have every autonomy to make decisions that benefit humanity, rather than allowing unscrupulous entities to hijack AI advancements and make humanity foot the bill for their selfish wants.

As the saying goes, “the only thing more dangerous than a weapon is the person who wields it”. Embracing artificial intelligence under the auspices of an ethical AI regulatory framework is the way forward, one in which humanity can both comprehend and responsibly utilize AI.

References:

RMIT Online. (2024, March 6). Aussies in denial about impacts of generative AI. RMIT Online. https://online.rmit.edu.au/blog/aussies-denial-about-impacts-generative-ai

Dillu, D. (2023, October 8). How over-reliance on AI could lead to cognitive atrophy. Medium. https://medium.com/neuranest/how-over-reliance-on-ai-could-lead-to-cognitive-atrophy-d04d214c7e75

Yan, W., Li, J., Mi, C., Wang, W., Xu, Z., Xiong, W., Tang, L., Wang, S., Li, Y., & Wang, S. (2022, September 21). Does Global Positioning System-based navigation dependency make your sense of direction poor? A psychological assessment and eye-tracking study. Frontiers in Psychology. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.983019/full

Timothy, M. (2023, April 4). Why even creative jobs are not safe from AI. MUO. https://www.makeuseof.com/why-creative-jobs-arent-safe-from-ai/

Wu, S. (2024, March 12). Introducing Devin, the first AI software engineer. Cognition. https://www.cognition-labs.com/introducing-devin

Acar, O. A. (2023, June 8). AI prompt engineering isn’t the future. Harvard Business Review. https://hbr.org/2023/06/ai-prompt-engineering-isnt-the-future

IBM. (2024). What is strong AI? https://www.ibm.com/topics/strong-ai

Mandel, D. R. (2023, November 15). Artificial general intelligence, existential risk, and human risk perception. arXiv. https://arxiv.org/abs/2311.08698

CAIS. (2023, May 30). Statement on AI risk. Center for AI Safety. https://www.safe.ai/work/statement-on-ai-risk

Altman, S., Brockman, G., & Sutskever, I. (2023, May 22). Governance of superintelligence. OpenAI. https://openai.com/blog/governance-of-superintelligence

Klarna. (2024, February 27). Klarna AI assistant handles two-thirds of customer service chats in its first month. Klarna International. https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/

CRB & team. (2013, November 21). Is Google making us neglect our own memory? Tech Monitor. https://techmonitor.ai/leadership/strategy/is-google-making-us-neglect-our-own-memory

Reshef, O., Hui, X., & Olin Business School. (2023, October 24). AI tools cause a decline in freelancer work and income – at least in the short run. Council on Business & Society Insights. https://cobsinsights.org/2023/10/24/ai-tools-cause-a-decline-in-freelancer-work-and-income-at-least-in-the-short-run/

Nguyen, B. (2023, May 2). AI “prompt engineer” jobs can pay up to $375,000 a year and don’t always require a background in tech. Business Insider. https://www.businessinsider.com/ai-prompt-engineer-jobs-pay-salary-requirements-no-tech-background-2023-3

Kile, F. (2013). Artificial intelligence and society: A furtive transformation. AI & Society, 28(1), 107–115.

Mok, A. (2023, November 20). Nvidia CEO Jensen Huang says Artificial General Intelligence will be achieved in five years. Business Insider. https://www.businessinsider.com/nvidia-ceo-jensen-huang-agi-ai-five-years-2023-11

Reuters, T. (2023, December 20). How is AI regulated in Australia? What lawyers should know. Thomson Reuters Legal Insight Australia. https://insight.thomsonreuters.com.au/legal/posts/is-ai-regulated-in-australia-what-lawyers-should-know

Commission, E. (2024, March 6). AI Act. Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Gallo, V., & Nair, S. (2024, February 21). The UK’s framework for AI regulation. Deloitte. https://www2.deloitte.com/uk/en/blog/emea-centre-for-regulatory-strategy/2024/the-uks-framework-for-ai-regulation.html
