Guest authors Marijn Hoijtink and Marlene Tröstl reflect on the sharp contrast between human empathy for military robots and the dehumanization of local publics that underpins remote warfare.
The Intimacies of Soldier-Robot Relations and the Making of Remote Warfare
For the Love of Robots
A group of soldiers gathers at a US Army test site in the heat of the southern Arizona desert. They are watching a demining robot – five feet long and modeled on a stick insect – perform at a live-test event. The robot finds a mine and blows it up. It loses a limb, picks itself up again and continues to move forward, determined to find the next landmine. The scene repeats several times, until the soldiers watch the robot drag itself forward on its last leg. The robotics physicist who designed the robot is pleased: his robot is performing exactly as it was programmed. The colonel in command of the test event, however, orders the test to be stopped. “Why?” asks the physicist. “What’s wrong?” The colonel shrugs, points at the burned, crippled machine and says: “This test is inhumane”.
The above scene was recounted by The Washington Post journalist Joel Garreau in 2007 as part of a wider news story about the intimate relationships soldiers develop with military robots. Writing against the background of the Afghanistan and Iraq wars, Garreau saw the deployment of “thousands of battle bots”, which offered new fighting capabilities but also “an unprecedented field study in human relationships with intelligent machines”. He noticed, for example, how soldiers would give names to military robots, or award them medals or “battlefield promotions”. Another article on the topic described how a group of US soldiers in Iraq organized a funeral to commemorate a “fallen” robot, which included a 21-gun salute.
These examples, in which soldiers extend human empathy to military robots, stand in sharp contrast with the military culture of dehumanization that underpins contemporary forms of military intervention. As underlined by the Intimacies of Remote Warfare program, warfare today is characterized by “a shift away from ‘boots on the ground’ deployments towards light-footprint military interventions”, involving drone and air strikes, but also intelligence operations and collaborations with local armed forces. This shift entails not only a physical removal from the battlefield, but also a moral and emotional distancing from the effects of waging war. Military technologies such as weaponized drones play a crucial role in this shift. This is not only because these technologies do much of the actual locating, identification and indeed ‘making’ of targets, but also because they enable a form of military intervention that is presented as more precise, more objective, and more ‘efficient’ (as these technologies are not hindered by the bad feelings that come from recognizing the human ‘behind’ the target).
Set against the opening example of this contribution, however, this raises the question of what it means to develop empathic, intimate and caring relationships with military robots while at the same time morally distancing ourselves from the death and suffering of local populations in warfare – populations in whose name, in some cases, we are waging war in the first place. How should we think about practices of anthropomorphizing military robots (ascribing them human-like characteristics), and what are the effects?
The Study of Social Robots
To be sure, people developing emotional connections with machines – and robots in particular – is nothing new. The scientific field of human-robot interaction has long studied how humans engage socially with robots, focusing on the engineering problem of how to program robots that can understand and display social behavior and emotions. MIT roboticist Cynthia Breazeal helped pioneer the field of social robots by designing Kismet, the first sociable robot, in the 1990s. According to Breazeal, robots should resemble biological systems and have affective competencies built into them to be plausible to humans and to facilitate human-robot interaction. Others have warned that humans will inevitably treat robots as living beings and have called for investigating the effects of human interaction with social, emotional robots. The work of Kate Darling, for example, recognizes that anthropomorphizing robots can be ‘useful’ as it contributes to human acceptance of robots, but also points out that human designers and operators themselves are affected as well. Darling considers anthropomorphic framing to be particularly concerning within the military domain, where extending the motto “leave no man behind” to robots runs counter to the very rationale for putting robots on the battlefield in the first place – that is, their potential to lower the risk of waging war.
Robo-buddies and their Unbearable Function
Whether or not military robots are programmed to be social, they perform an important social and political function. As illustrated by the article in The Washington Post, military robots are not just means to an end, or simply tools that soldiers maintain, rely on and use daily. Rather, the robots in these examples provided soldiers with a sense of control and the ability to (temporarily) escape or distance themselves from the dangers of the battlefield or the violent consequences of war. As Peter Singer describes in his book Wired for War, in “some of the most searing and emotionally stressful events possible”, soldiers “grow to refer or even relate to their robot almost like they would with one of their human buddies”. Military robots are also central to building a community and creating a sense of belonging. Building on the work of feminist scholar Carol Cohn, who studied the performative effects of the highly abstract (and sexist) language defense strategists used during the Cold War, we can see how the very act of anthropomorphizing military robots serves a broader function: making war to some extent ‘bearable’ and thereby possible. Combined with the notion that military robots allow Western militaries to wage a ‘riskless’ form of war – distancing themselves from the battlefield and from the destructive consequences of the violence they execute and transfer onto local populations – this makes military robots socially and politically extremely functional for waging war in an era in which there is little democratic support for ‘boots on the ground’ and large-scale investments in defense.
Spaces for Critique
However, can the experience of forging intimate and meaningful relationships with military robots also open up a space for critiquing remote warfare and its logic of dehumanization? In their book Surrogate Humanity, Neda Atanasoski and Kalindi Vora discuss the example of the open letter written by four former US drone operators in 2015. In their letter, the drone operators denounced the US drone war and how it instills “an institutional culture callous to the death of children and other innocents”. For Atanasoski and Vora, the letter – and the experiences, emotional distress and psychological disorders described by the drone operators – represents “the failure of bureaucratized killing” and opens up a space for critique. Perhaps the same function can be performed by these examples of the empathic and intimate relationships between soldiers and military robots and the uneasiness that they generate. Calling out the discrepancy between how robots are cared for and how the lives of civilians in conflict and war are treated can be a relevant strategy for critique, and one that is increasingly important in a context in which military investments in AI, machine learning and robotics are peaking.
Marijn Hoijtink is Assistant Professor in International Relations at Vrije Universiteit Amsterdam. Her research interests include emerging security technologies, their circulation and relation to risk, militarism, and weapons research. She is the editor of Technology and Agency in International Relations (with Matthias Leese), published by Routledge in 2019. Her current research project, funded by the Dutch Research Council (NWO), focuses on the politics of AI and examines how novel AI applications shape and transform current military practices and forms of decision-making. @HoijtinkMarijn
Marlene Tröstl is a third-year Bachelor student in Philosophy, Politics, and Economics (PPE) at the Vrije Universiteit Amsterdam (VU) and is currently writing her bachelor thesis on the relevance of Plato’s conception of rhetoric for the analysis of warfare discourse today. Alongside her PPE studies, she holds a research assistantship at VU’s Faculty of Social Sciences, where she has mainly been researching the social role of modern warfare technologies as well as the development of AI policy perspectives at the EU level. Moreover, she is a co-founder of Initiative Interchange, an international network raising funds for primary education. LinkedIn
Would you like to submit an article?
This article was submitted by guest authors Marijn Hoijtink and Marlene Tröstl. The Intimacies of Remote Warfare program is open to publishing high-quality, insightful submissions on topics related to remote warfare. If you are interested in submitting a piece to us, or in exploring potential opportunities for collaboration, please contact our project officer Aoife Keogh at firstname.lastname@example.org.