Podcast: The future use of lethal autonomous drones

In this 9th episode of the ‘Lethal Autonomous Weapons: 10 things we want to know’ podcast series, Professor Dr. Paola Gaeta and PhD candidate Abhimanyu George Jain interview Kenneth Payne, a Reader in International Relations at King’s College London. The episode covers the controversial topic of using Artificial Intelligence (AI) to replace humans in carrying out drone strikes. The main questions that arise throughout the episode concern the differences between humans and technology, and whether AI will make it easier to go to war.

Payne perceives AI as a rapid decision-making technology rather than a weapon, owing to its ability to execute tasks more quickly than humans can. Unlike humans, AI lacks emotion and intuition and can therefore remain dispassionate, a characteristic that supports its speedy identification of patterns and connections in large data sets. Payne therefore argues that what humans find challenging, a computer can do well, whereas what humans do instinctively, AI cannot. His argument centres on the rationality of AI technology and its ability to make decisions in warfare, yet he loses sight of the blowback and civilian harm caused by AI-directed drone strikes. Payne neglects the limitations of AI and the fact that its strikes can still be inaccurate and cause collateral damage. Examples of such miscalculations can be seen in the U.S. drone strikes in Afghanistan, Libya, and Pakistan.

Prof. Dr. Gaeta and PhD candidate Jain question Payne on the role of risk aversion. Payne indicates that societies have become more reluctant to commit their own forces and people to war. In effect, the use of autonomous armies may increase societies’ willingness to use force. Jain follows up by asking whether using AI will make it easier to go to war. Payne indicates that this is not necessarily true; it depends, however, on the range at which targets are engaged. He notes, for example, that wars can become less personal when AI extends the range at which they are fought, which reduces political costs as well as deaths among soldiers. Advances in AI’s speed and accuracy may therefore increase its use in the future.

Although Payne analyses AI as a tool for waging war from a strategic perspective, he overlooks its direct and indirect effects on civil society. He focuses on the rationality and capabilities of AI, yet disregards the humanitarian side of warfare. Payne also neglects the role of accountability and transparency among the governments and militaries that use AI in drone warfare. His overall outlook is that AI will be used more extensively; this, however, is a risk in itself, and one he does not acknowledge. A rise in the use of AI risks normalising its role in conducting drone strikes.

This post was written by IRW LAB student Eva Akerboom.