Op-ed: Israel’s use of AI in Gaza War is morally unacceptable

The IDF’s military-technological superiority in the Gaza War should not in any way be mistaken for a moral superiority.

That is what Dr. Marijn Hoijtink and Robin Vanderborght, scholars of International Relations at the University of Antwerp, argue in an op-ed in the Flemish daily newspaper De Standaard. In their piece, Hoijtink and Vanderborght reflect on Israel's use of autonomous targeting software and heavy bombing in Gaza in response to Hamas' October 7 attacks.

Recent independent research shows that Israeli airstrikes and ground attacks constitute war crimes. UN experts describe them as “clear violations of international humanitarian law”, and speak of a “genocide in the making”. Others continue to defend Israel’s attacks on Gaza, but as Hoijtink and Vanderborght argue, their arguments lack merit.

One commonly cited argument is that Israel's response is an act of self-defense. While the applicability of this right in Gaza – deemed occupied territory by many – is heavily debated, Israel must in any case respect the principles of proportionality and military necessity. The exceptionally high number of civilian casualties and the scale of destruction in Gaza since October 7 demonstrate that Israel's exercise of this right – applicable or not – is disproportionate.

A second argument that is often invoked is a moral one. Unlike Hamas, Israel claims to make efforts to protect civilians by issuing warnings ahead of attacks. Additionally, Israel emphasizes its use of “smart” precision weapons, employing artificial intelligence (AI) to swiftly and efficiently identify and neutralize Hamas targets while minimizing harm to innocent civilians. According to Hoijtink and Vanderborght, this reasoning confuses the IDF's technological superiority with moral superiority.

Collateral damage

Hoijtink and Vanderborght consider the high-tech discourse underpinning this claim of a moral war problematic. Israel's violations in Gaza show that AI technology and widespread automation lead to more attacks and, consequently, more civilian casualties – despite the claims of precision. Moreover, the technologies the Israeli military is experimenting with in Gaza cross moral boundaries.

Israel's military-technological superiority is undeniable. It is well known that the country has a lucrative arms industry that profits from the “battlefield-tested” label of its products. The Israeli occupation of Palestinian territories also has a clear technological dimension: biometric and surveillance technologies are used extensively to control and restrict Palestinian civilians.

In recent years, like other major military powers such as Russia, China, and the US, Israel has increasingly focused on integrating AI into its military operations. In 2021, Israel claimed to be waging its first “AI war” during military operations against Hamas. In early November, an Israeli officer told The Jerusalem Post that the Israeli military uses AI to “quickly and accurately identify targets” in Gaza. Given the sheer number of bombardments of Gaza in recent weeks – over 15,000 according to the latest figures – it is evident that AI-driven and automated software lies behind these attacks.

Academic research has long pointed to the fallibility of such technology in identifying and attacking targets, especially in complex operational contexts like Gaza. Now that AI is being widely deployed in Gaza, we clearly see the consequences of these new ‘AI wars’: they make it possible to carry out many more attacks in a shorter timeframe. In the first week after the October 7 attacks, Israel dropped as many ‘precision bombs’ as the United States did in Afghanistan during the whole of 2019. Even if one were to assume that AI enables Israel to strike with greater precision – a claim experts doubt – this does not outweigh the additional civilian casualties resulting from the increased number of attacks.

Human control

The international community has long debated the use of AI for critical military decisions, such as target selection. While there is no international consensus on regulating military AI, the necessity of some form of ‘meaningful’ human control in critical attack situations is a broadly accepted moral norm.

Israel’s use of AI in Gaza crosses this moral boundary. If over 15,000 targets are eliminated in a matter of weeks, questions arise about how ‘meaningful’ human control can be maintained. Do human supervisors have enough time to thoroughly verify whether targets are correctly identified and selected? Do they have the time to analyze whether the number of civilian casualties is proportionate to the expected military gain – as prescribed by international humanitarian law? These are concrete assessments that must be made by human supervisors, not by an algorithm primarily used to expedite and streamline the targeting process.

Many questions about the applications and consequences of AI in contemporary warfare remain unanswered. Still, it is clear that in the past weeks, new steps have been taken that were previously considered morally unacceptable by a significant portion of the international community. 

Maintaining the illusion of a precise and moral war obscures both the reality in Gaza and Israel's responsibility for it. The only way to prevent more civilian casualties is a permanent ceasefire, followed by the development of a political solution. The use of AI-driven precision weapons is a slippery slope that will only lead to more innocent victims and more destruction – not only in Gaza but also in the wars to come.

Image: A column of smoke resulting from an Israeli airstrike near the Al Khalady Mosque in Northern Gaza