The Road Ahead for the Regulation of Autonomous Weapons

When the UN’s Convention on Certain Conventional Weapons (CCW) Sixth Review Conference began in December 2021, the world had high hopes. For several states and numerous non-state actors, the ideal outcome would have been an agreement to negotiate a legal instrument regulating lethal autonomous weapons systems (LAWS). Such an agreement would have opened a much-needed pathway towards greater control and responsibility over these controversial technologies at a time when the speed of innovation is ever accelerating. However, by the end of the conference no such agreement had been reached. How did this happen? Given that almost all of the 125 parties to the CCW agree that some form of regulation is needed to prevent the widespread proliferation of LAWS, why did the conference fail to reach an agreement on a mandate?

The CCW and LAWS

The core purpose of the CCW is “to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately”. To achieve this, the CCW has a special structure consisting of two important pillars: the “framework convention”, which codifies the rules and procedures for how the CCW is run; and the “protocols annexed to the convention”, which contain the agreements on limitations or bans of specific weapons. In the past, these protocols have covered weapons such as landmines and blinding laser weapons, both deemed to be either indiscriminate or unnecessarily cruel. LAWS, and in particular fully autonomous weapons, could potentially fall into this category, and so in 2014 the states parties began talks on banning or regulating them.

The Sixth Review Conference

Entering the conference, there was a sense among “pro-regulation” actors that this was a ‘make-or-break’ moment. The primary arguments for regulating LAWS were that these weapons increase the risk of civilian harm in conflict, raise difficult questions of accountability, could fall into the hands of malicious actors such as terrorists, and lower the political costs of going to war (thus increasing the chances of conflict). Underlying most of these concerns are the question of autonomy, the power over life and death, and the concept of “meaningful human control”, which at its lowest acceptable threshold means that “a machine applying force and operating without any human control whatsoever is broadly unacceptable”. Meaningful human control would require that the decision to fire a weapon and harm a human is never taken by an autonomous system without human intervention: a human always inspects and controls the final action of the machine. This argument was raised by the UK during the CCW, which pointed out that it would “oppose the creation of these systems without meaningful human involvement as they could not satisfy the requirements of international humanitarian law”. However, this is easier said than done: there is as yet no international consensus on what meaningful human control must entail, or whether it is even possible within LAWS. See this article by Aoife Keogh for a deeper discussion of autonomy and human control.

Of particular concern is that once a threshold of autonomy is crossed, it will be impossible to stop further development. It would be a sign of safety, security and stability if an international treaty could be established regulating autonomous weapons so that they never reach this threshold. One of the most prominent actors fighting for such a treaty is “Stop Killer Robots”, a global coalition incorporating more than 180 partner organisations worldwide. Amnesty International, one organisation in the steering committee, argues that “a total ban on the development, deployment and use of ‘killer robots’ is the only real solution”. The coalition’s main goal and hope for the CCW was a positive step towards the prohibition of, or meaningful human control over, autonomous weapons. Instead, the CCW merely produced a commitment to keep discussing the issue in future conferences, falling short of even an agreement to negotiate a binding instrument, let alone an instrument itself. Following this “non-agreement”, Senior Advisor at Amnesty International Verity Coyle voiced concern that with ongoing rapid advances in the development of LAWS, “the window of opportunity to regulate grows ever smaller”.

The inability to create binding regulations, or even to commit to a process aimed at such regulations, was especially frustrating given that the majority of states were willing to take steps towards more regulation. However, because the CCW operates on a consensus-based procedure in which all states must agree to any regulation or adjustment, a small group of states was able to prevent such action. Russia, the United States, Israel and India adjusted or deleted many of the most binding parts of the report towards the end of the CCW process, resulting in an agreement to keep the talks going, but with no meaningful outlook for a legally binding instrument of regulation. These states argued that it would be “premature” to engage in binding regulation, despite being precisely the states most engaged and advanced in developing these new technologies. What is especially striking is that it was solely highly militarised countries that acted to prevent regulation of LAWS.

Matthew Anzarouth points out that the U.S., which argued that the “ambiguity of such an instrument might be attractive to some but would cause paralysis and politicisation”, has been investing heavily in applications of AI as well as automated decision-making. The superpower appears to see advantages in these systems, such as greater operational efficiency and the potential to reduce risk to its own soldiers.

The Road Ahead

There are different opinions on how we should approach this new, disruptive technological development. What is clear is that the pace of innovation is accelerating beyond the boundaries of current regulation, and the failure of the CCW will do nothing to close this growing gap. That is why it is so devastating that the CCW did not achieve what it is known for: providing a flexible space in which states with strongly differing viewpoints can reach agreement based on military interests while also considering humanitarian principles.

Controversially, there are voices who argue that this inability of the CCW to achieve its mandate might present new opportunities for the international community. Following the convention, there has been movement among pro-regulation states to find common ground for debating and setting up rules in an alternative forum to the CCW. Jeremy Kahn shows how such an “alternative forum” could emerge in two different scenarios. The first is that the “Group of Eight”, a group of eight highly industrialised countries with the goal of finding common ground on global concerns, becomes an alternative forum to continue the talks. The second is a bespoke international convention, as with the successful bans on anti-personnel landmines and cluster munitions. Those bans were made possible by single states taking the initiative to invite other states to come to a resolution, and this may be the most efficient scenario for negotiating binding regulations outside of the UN.

Nonetheless, this scenario has its drawbacks. One is that hosting such an international convention requires a huge financial effort, so not every country that would like to host one is able to finance it. Another is that major players in LAWS development like China and the U.S. will most likely not opt to join such a convention. Of course, an international treaty without these states would still be better than none at all, and treaties like the Ottawa Treaty and the recently adopted Treaty on the Prohibition of Nuclear Weapons (TPNW) can exert considerable pressure even on non-member states. In this way they can help to create new norms that may in time shape all actors’ behaviour.

The above-mentioned scenarios are all possible, and each has its own merits and weaknesses. As attempts to address LAWS continue, many fundamental questions remain unanswered. The central one is defining the concept of “meaningful human control”, and thus the core questions remain: What are the commonly agreed criteria for LAWS? Where will we draw the line between what qualifies as LAWS and what does not? And will we, in the end, be able to control this controversial, disruptive technology?