In their recent article published in Ethics & International Affairs, Neil Renic and Elke Schwarz assess the moral challenges raised by the emergence of autonomous weapons systems. They problematize the potential use of lethal autonomous weapons systems (LAWS) by drawing comparisons with patterns visible in historical cases of systematic killing, such as Nazi practices of human classification. The authors argue that both perpetrators and victims undergo a process of dehumanisation when AI-driven lethal force is employed. On the one hand, as targeting becomes a computerised, routinised process, the authority, control, and sense of responsibility of those executing violence diminish. On the other hand, human targets become objectified, stripped of their rights, and risk being falsely identified as combatants. At its core, the article challenges the idea that LAWS can be more ethical agents than humans and warns of the ways in which they may reproduce, or even intensify, patterns of systematic killing seen in the past. The increased use of AI-infused targeting systems, such as “Habsora”, which Israel uses to generate an unprecedented number of targets, and with them civilian harm, in Gaza, only underlines the need for thorough moral and political scrutiny of these novel technologies in warfare.
Further reading on the topic and how it relates to Gaza:
- Schwarz, E. ‘Devalued Humanity: The Status of Human Life in Times of Nihilistic War’. OpinioJuris, 13 March 2024.
- Renic, N.C., and E. Schwarz. ‘Inhuman-in-the-Loop: AI-Targeting and the Erosion of Moral Restraint’. OpinioJuris, 19 December 2023.
Image source: Ministerie van Defensie