In the latest issues of the weekly news magazines De Groene Amsterdammer and Knack, Realities of Algorithmic Warfare researchers Dr. Lauren Gould, Linde Arentze (PhD), and Dr. Marijn Hoijtink trace the realities of algorithmic and autonomous warfare and its impact on civilian harm in Gaza and beyond. While proponents of algorithmic warfare have long boasted that AI will increase speed and precision in warfare while decreasing the number of civilian casualties, Gould, Arentze, and Hoijtink’s analysis illustrates that this is far from how algorithmic warfare is playing out in reality.
Tracing the entire AI kill chain, from civilian harm back to the military officers who deploy AI-driven weapon systems and the military-industrial-commercial complex that develops them, allows Hoijtink, Gould, and Arentze to untangle the increasingly complex lines of responsibility in contemporary algorithmic warfare. Their full analysis can be found in De Groene Amsterdammer and Knack. Dr. Marijn Hoijtink also delves deeper into the topic in De Groene’s ‘Buitenlandse Zaken’ podcast series. An earlier English iteration of their argument can be read in The Conversation.
Their analysis starts with recent revelations that the Israel Defense Forces (IDF) are using AI decision support systems in their targeting practices in Gaza. According to IDF intelligence officers, who spoke to investigative journalists at +972 Magazine and Local Call, the IDF rely on mass surveillance and algorithmic systems to track movements and predict Hamas military membership among Palestinians living in Gaza. The use of these technologies has increased the number of buildings identified as targets from 50 per year to 100 per day and has allowed intelligence officers to earmark up to 37,000 Palestinians as potential targets. Moreover, the intelligence officers report they have just 20 seconds to decide whether a computer-generated target is legitimate. This AI-driven war has left 39,324 dead and 90,830 injured, and has destroyed 60 percent of civilian infrastructure in Gaza.
In line with their new research agenda, Gould, Arentze, and Hoijtink illustrate in their analysis why it is important to move beyond the speculative potential and future use of algorithms in warfare, and instead investigate the realities of how AI is increasing the speed of warfare and the scale of civilian harm on contemporary battlefields. They first examine which systems the IDF are using and trace the long history of operational experience, mass surveillance practices, and military-industrial collaborations that enabled their development and deployment. This approach reveals a global military-industrial-commercial complex that not only caters to the IDF but supplies militaries across the world, and that is having a lasting impact on battlefields in Yemen, Iraq, Syria, and Ukraine.
Second, the authors zoom back in on the reports by +972 Magazine and Local Call and analyze how Israel’s military use of algorithmic technologies changes human-machine-human interactions in operational contexts. They highlight how AI increases the number of targets and accelerates decision-making and targeting processes, while responsibility for the harm done is delegated to the technologies and/or their end users. Looking beyond body counts, the authors show that Israel’s algorithmic practices in Gaza exacerbate known patterns of civilian harm in urban warfare and introduce new, under-researched forms of suffering. They conclude that, in addition to increased deaths, injuries, and destruction, algorithmic targeting practices create a ‘psychic imprisonment’, in which civilians know they are constantly under surveillance yet do not know which behavior the machine will act on.