Pandora’s Technologies: Unleashing and Governing AI

Crimson light devoured the sky, flames raged, and dust from shattered buildings billowed into the air. The scene of war seemed never to change.

Amidst this war-torn landscape, however, the presence of elite military troops has diminished. In their place, complex algorithms now predict enemy movements and automated systems make battlefield decisions in a matter of seconds. Artificial intelligence is replacing the combatants on both sides in the conduct of military force, or rather, in the perpetuation of massacres.

This scenario has been depicted countless times in fiction and on screen. Yet it has now become a grim reality of contemporary conflict, as the recent Israel-Hamas war illustrates.

AI at the Front Lines: Israel’s ‘Lavender’

A report in The Guardian reveals that the Israeli military reportedly used a previously undisclosed artificial intelligence (AI) database in its bombing of Gaza (McKernan & Davies, 2024). This AI system, called “Lavender,” identified as many as 37,000 potential targets based on their apparent connections with Hamas, marking most of them as low-level members of Hamas’s military wing and militants, thereby clearing them for attack and elimination by the Israel Defense Forces.

Intelligence officers who used the “Lavender” system said they preferred trusting a statistical mechanism to relying on soldiers overwhelmed with grief: “The machine’s cold efficiency makes the situation easier to handle.” One added, “During this process, I have 20 seconds to consider each target and need to process dozens of targets a day. My role is limited to final confirmation; beyond that, I had zero added value as a human. This saves a lot of time.” Another noted, “You are willing to accept the risk of using artificial intelligence, including collateral damage and civilian deaths, as well as the risk of wrong attacks, and to bear this kind of error” (McKernan & Davies, 2024).

A video report on Israel’s use of the AI system “Lavender” (Democracy Now!, 2024)

To be clear, my focus here is not on the war’s moral dimensions. Rather, the deployment of AI and automation technologies in this context reveals growing issues and implications that transcend borders and battlegrounds. The use of such an advanced AI system marks a new phase of high-tech warfare in Israel’s conflict with Hamas: it raises a series of legal and ethical questions, and it alters the relationship between military personnel and machines.
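To see why the officers quoted above describe their confirmation role as adding “zero added value,” it helps to work through the arithmetic of such a human-in-the-loop pipeline. Nothing about Lavender’s internals is public, so the following is a purely hypothetical sketch: the data structure, the scoring model, and the 0.9 flagging threshold are my own illustrative assumptions; only the 20-second review budget and the roughly 37,000 flagged targets come from the reporting (McKernan & Davies, 2024).

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """A person flagged by an opaque statistical model (hypothetical)."""
    identifier: str
    risk_score: float  # assumed model output in [0.0, 1.0]


APPROVAL_THRESHOLD = 0.9  # assumed flagging cut-off; not from the reporting
REVIEW_SECONDS = 20       # per-target review time cited by the operators


def review_queue(candidates: list[Candidate]) -> list[Candidate]:
    """Flag high-scoring candidates and estimate the human review burden."""
    flagged = [c for c in candidates if c.risk_score >= APPROVAL_THRESHOLD]
    total_hours = len(flagged) * REVIEW_SECONDS / 3600
    print(f"{len(flagged)} flagged; ~{total_hours:.1f} hours of human review")
    return flagged


if __name__ == "__main__":
    import random

    # At the reported scale, the arithmetic alone shows why "final
    # confirmation" degrades into a rubber stamp: 37,000 targets at 20
    # seconds each is roughly 206 hours of uninterrupted human review.
    print(f"Reported scale: ~{37_000 * REVIEW_SECONDS / 3600:.0f} hours")
    review_queue([Candidate(f"person-{i}", random.random()) for i in range(1000)])
```

Even under these generous assumptions, the numbers make the operators’ testimony legible: a reviewer who keeps to the 20-second budget is ratifying the model’s output, not meaningfully auditing it.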

AI – A Critical Look at Its Capabilities

To explore the technological transformation of modern warfare in more depth, we must first understand the basic concepts of AI and automation.

AI usually refers to equipping automated tools with intelligence. These tools substitute for humans in both physical and cognitive tasks, performing them more cheaply, more quickly, and often better (Tyson & Zysman, 2022, p. 256). The core and original purpose of these technologies is to perform tasks that typically require human intelligence, with minimal human intervention.

An opposing view holds that “AI” is a misleading term. Crawford (2021, pp. 7-9) argues that AI is neither artificial nor truly intelligent. Instead, she proposes that AI is a complex system shaped by social, political, and economic forces, one heavily dependent on material resources, human labour, and existing social structures. AI systems are trained on biased data and serve the interests of those in power. This perspective emphasizes the broader context of AI development, including its political and social underpinnings, rather than just its technical aspects.

But whichever view you take, you have probably heard of or used some of the most common AI applications, such as Apple’s Siri, Google’s search algorithm, and the ChatGPT series. These tools are widely used in education, technology, and business decision support. Our current work and economic models are clearly being transformed, and in places replaced, by new automation and digital technologies (Autor, 2021, pp. 26-27). AI and automation are rapidly evolving and converging across fields, deeply affecting our daily lives and operational strategies across industries.

Common mobile AI apps in 2023 (Pandya, 2023)

Alongside this transformative impact, however, it is crucial to consider these technologies’ potential harms. Poorly regulated use of AI and automation presents significant risks and challenges that demand careful regulatory and ethical attention. This is especially evident when we connect these technological developments to the context of modern warfare.

The Dual-Use Dilemma

In the specific context of the Israel-Hamas conflict, one crucial concern about AI and automation is the dual-use nature of these technologies: the widespread use of AI in civilian applications does not mitigate the safety and liability concerns raised when it is applied in other domains (Fernández Llorca et al., 2023, pp. 615-616). Consider the battlefield, where AI is employed to identify and eliminate enemy combatants at scale. Critical questions follow. Does this use of technology truly surpass human capabilities in cognitive tasks and decision-making? Can AI completely avoid targeting civilians in war? And should responsibility for such outcomes rest with the AI technologies themselves, or with the people who deploy them?

Returning to the case: deploying “Lavender” in the war demonstrates AI’s capacity for surveillance and combat-support roles. From a purely technological standpoint, AI-assisted techniques are not themselves at fault, but their massive use simply to identify enemies through recognition techniques is horrifying. Without strict regulation, this could result in considerable civilian casualties and breaches of international humanitarian law. As Israeli intelligence officials have said:

“Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”

(McKernan & Davies, 2024)

Palestinian children salvage items amid the destruction caused by Israeli bombing (The Japan Times Editorial Board, 2024).

Necessity, Complexity, and Possible Solutions for AI Governance

Necessity

Blindly trusting AI and automation would therefore be another form of techno-romanticism. The failure of the Californian Ideology has taught us that regulatory frameworks and government intervention are necessary counterweights to such digital technological forces (Flew, 2021, pp. 32-33).

The Californian Ideology, a philosophical movement that emerged in the late 20th century, represents a fusion of free-market libertarianism and a belief in the transformative power of digital technologies. It merges a strong belief in technological progress with a commitment to free markets and a skepticism of government intervention, asserting that technology, particularly the internet and digital platforms, can bring about social and economic improvements without traditional regulation and oversight.

(“The Californian Ideology,” 2024)

Complexity

By now you may have a basic sense of why AI needs governing, but this brings us to another problem: how to develop effective regulatory measures for these emerging technologies. As Flew (2021, pp. 212-213) points out in his book “Regulating Platforms”, the supervision of information technologies is fraught with complexities and difficulties.

One problem is that the development of artificial intelligence is gradually diffusing into technological infrastructure all over the world. A governance framework therefore relies on the collaborative efforts of multiple stakeholders, who must strike a balance while competing against one another in the global marketplace. Managing this contradiction requires constant international dialogue and coordination, yet countries’ views on AI use and regulation vary greatly, making globally aligned rules harder to reach.

In addition, the rise of techno-nationalism and the decline of global cooperation are important causes of regulatory difficulty. Flew uses the example of internet governance in China to suggest that techno-nationalism is increasingly part of the diplomatic policy of the United States, Europe, and China. This shift has intensified international competition and diminished prior technological and economic collaboration. Further, the differences between the Chinese and American models of information technology development may lead to opposing choices of AI regulatory strategy and policy (Flew, 2021, pp. 212-213).

Crawford (2021, pp. 19-21) also highlights the power inequalities behind AI technologies: who sets the standards, who develops the systems, and how they are implemented across society. Unequal power structures give AI the potential to exacerbate existing social injustices. These injustices span multiple dimensions, from ethics to social policy, so we must consider how to distribute AI’s benefits and strike a regulatory balance as equitably as possible.

There is also the use of AI as a political tool, particularly for surveillance and propaganda. This politicized use demonstrates AI’s potential as an instrument of power, which means regulatory policies may be systematically obstructed by the very power holders they would constrain. Ensuring that AI technologies are not misused for political purposes is a major challenge, and one that underscores the importance of technological transparency and public participation.

The Israel-Hamas war is a compelling example of the urgent need for robust, transparent, and equitable AI governance. Without strict oversight, the risks of AI in conflict environments extend far beyond the battlefield, threatening global stability and human security. The complexities highlighted by scholars such as Flew and Crawford show that, however challenging AI regulation may be, we must strive to ensure these powerful technologies are employed responsibly and ethically in international relations and warfare.

AI-generated fake images make real images of war harder to distinguish (Klee & McCann Ramírez, 2023).

Possible Solutions

A clear understanding of these complexities helps us chart the direction of future AI regulatory policy. To analyse the regulation of AI, we can draw on the concept of governance and the measures Weber proposes in his book “Shaping Internet Governance: Regulatory Challenges”.

Governance, according to Weber (2010, pp. 4-7), refers to the mechanisms and processes through which citizens and groups exercise their legal rights and fulfil their obligations. This is a short, simplified interpretation for general discourse. Applied to the field of AI, I think governance must also encompass frameworks and standards that ensure AI technologies are developed and used ethically and responsibly.

Speaking of such regulatory standards, AI governance must first involve a wide range of stakeholders, including governments, companies, and the public. This approach ensures that different perspectives are considered, leading to a more balanced and inclusive governance framework. The Geneva Declaration of Principles recognises the need for inclusive stakeholder participation both in raising public policy issues and in exercising the power to address them (Weber, 2010, pp. 5-6). Multiple perspectives on governance are critical to creating legal and ethical frameworks for the use of AI.

Second, given the global reach of AI technologies and their deployment, it is equally important to develop international standards and guidelines for AI management. Establishing such standards is challenging, since global directives must be balanced against local realities, including diverse cultural and social contexts. Yet the need is urgent, because adaptable international frameworks allow AI governance to respond to new technological developments and uses in a timely manner.

The current governance of AI in warfare, as the Israel-Hamas conflict demonstrates, still lacks an inclusive, international regulatory framework. Managing AI’s role in warfare necessitates a collaborative approach in which technologies are designed to safeguard ethical principles and human rights. Much more effort is needed to reach this goal.

Conclusion

The rapid integration of AI into modern warfare, exemplified by the Israel-Hamas conflict, underlines the urgent need for comprehensive governance. The conflict reveals both the profound capabilities and the potential risks of AI technologies, particularly in military operations. Systems such as the “Lavender” database used by the Israeli military demonstrate the efficiency and precision that AI can bring to warfare, yet they also raise significant ethical and legal concerns, particularly regarding the safety of civilians and compliance with international humanitarian law.

It is therefore crucial to establish and implement robust governance frameworks for AI technologies. These frameworks should include a wide range of stakeholders and ensure a consistent, effective approach to AI regulation across borders. As AI technologies continue to evolve and reshape many facets of life, the lessons drawn from their use in the Israel-Hamas war must inform future regulatory policy.


Reference List:

Autor, D. H. (2021). The work of the future: Building better jobs in an age of intelligent machines. The MIT Press.

Crawford, K. (2021). Introduction. In The Atlas of AI (pp. 1–21). Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t.3

Democracy Now! (2024, April 6). Lavender & Where’s Daddy: How Israel used AI to form kill lists & bomb Palestinians in their homes [Video]. YouTube. https://www.youtube.com/watch?v=4RmNJH4UN3s

Fernández Llorca, D., Charisi, V., Hamon, R., Sánchez, I., & Gómez, E. (2023). Liability Regimes in the Age of AI: A Use-Case Driven Analysis of the Burden of Proof. Journal of Artificial Intelligence Research, 76. https://doi.org/10.1613/jair.1.14565

Flew, T. (2021). Regulating Platforms. Polity.

McKernan, B., & Davies, H. (2024, April 3). ‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets. The Guardian. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

The Californian Ideology. (2024). In Wikipedia. https://en.wikipedia.org/w/index.php?title=The_Californian_Ideology&oldid=1194970120

Tyson, L. D., & Zysman, J. (2022). Automation, AI & Work. Daedalus, 151(2), 256–271.

Weber, R. (2010). Shaping Internet Governance: Regulatory Challenges. Springer. https://doi.org/10.1007/978-3-642-04620-9

Images:

Pandya, J. (2023, December 1). Does the Future of Mobile App Development Belong to Artificial Intelligence? Expert App Devs. https://www.expertappdevs.com/blog/future-of-mobile-app-development-belong-to-ai

Klee, M., & McCann Ramírez, N. (2023, October 27). AI has made the Israel-Hamas misinformation epidemic much, much worse. Rolling Stone. https://www.rollingstone.com/politics/politics-features/israel-hamas-misinformation-fueled-ai-images-1234863586/

Al Jazeera. (2023, December 9). The Gospel: Israel turns to a new AI system in the Gaza war. https://www.aljazeera.com/program/the-listening-post/2023/12/9/the-gospel-israel-turns-to-a-new-ai-system-in-the-gaza-war

The Japan Times Editorial Board. (2024, April 5). Israel needs to stop killing civilians immediately. The Japan Times. https://www.japantimes.co.jp/editorials/2024/04/05/israel-killing-civilians/
