Practically speaking, in the future, I think soldiers are going to be superheroes who have the power of perfect omniscience over their area of operations, where they know where every enemy is, every friend is, every asset is. — Palmer Luckey, Anduril Industries, quoted in Suchman, 2020: 8
In this article for Critical Studies on Security, Lucy Suchman links broad questions of precision and accuracy in US military engagements to the practice of automating these operations. Focusing on the highly contemporary Project Maven (a DoD endeavour to automate the analysis of drone footage), she argues that the adoption of AI technologies in this domain “can only serve to exacerbate military operations that are at once discriminatory and indiscriminate in their targeting.”
US militarism determines threats and responses on the basis of situational awareness, which can range from momentary tactical comprehension to longer-term strategic planning. Attempts to refine situational awareness at every level are continually frustrated by the Clausewitzian “fog of war.” Applying AI to warfare (e.g. through Project Maven) is frequently framed as a technological solution to this intractable problem.
A tendency to conflate the precision of the relation between a weapon and its target with the question of what constitutes a legitimate target in the first place has served to broadly legitimise contemporary military interventions (for more on this, see The Ambiguities of Precision Warfare). Building on this insight, Suchman delivers a well-reasoned critique of “military technophilia”: in essence, no technological improvement can overcome the fundamental problem of identifying targets on the basis of loosely defined parameters.
“Algorithmic warfare” presents a timely and forward-thinking contribution to the literature on remote warfare. In engaging with the complex political and technical practices that constitute targeting in contemporary military engagements, its sceptical perspective on the promise of technology to dissolve the fog of war is likely to age well.