As Israel’s air campaign in Gaza enters its sixth month after Hamas’s terrorist attacks on October 7, Lauren Gould, Marijn Hoijtink, and Linde Arentze reflect, in a piece in The Conversation, on how the IDF’s use of AI is increasing the speed of targeting and the scale of civilian harm in unprecedented ways.
As AI-enabled targeting is employed across theatres of war and in densely populated urban contexts, it is crucial to slow down and take stock of what the use of AI in warfare actually means. The authors argue it is important to do so not from the perspective of those in power, but from that of the officers executing it and the civilians undergoing its violent effects in Gaza.
Their focus highlights the limits of keeping a human in the loop as a failsafe and central response to the use of AI in war. Instead, they show that as AI-enabled targeting becomes increasingly computerised, the speed of targeting accelerates, human oversight diminishes, and the scale of civilian harm increases.
Beyond exacerbating deaths, injuries, and destruction, Gould, Hoijtink, and Arentze illustrate that a compounding effect of algorithmic warfare is a form of psychic imprisonment: people know they are under constant surveillance, yet do not know which behavioural or physical “features” the machine will act on.
To delve deeper into their insights, read the full piece in The Conversation here.
Gould, L., L. Arentze, and M. Hoijtink. 2024. ‘Gaza War: Artificial Intelligence Is Changing the Speed of Targeting and Scale of Civilian Harm in Unprecedented Ways’. The Conversation.
Image by Hosny Salah