RAW director Jessica Dorsey, together with Dr. Marta Bo (TMC Asser Instituut), Prof. Ingvild Bode (University of Southern Denmark) and Prof. Elke Schwarz (Queen Mary University of London), contributed to the UN Secretary-General’s report on the use of AI in the military domain and its implications for international peace and security. Their submission, made as a group of academics with expertise in the legal, ethical, and political dimensions of military AI, was provided pursuant to UN General Assembly resolution 79/239, adopted in December 2024.
The experts’ input centers on the need for a more concentrated focus on AI-based decision support systems (AI-DSS) used in military targeting: technologies that help gather data, analyze threats, and suggest possible actions. Unlike autonomous weapons, AI-DSS do not directly engage targets, yet their influence on how military decisions are made is significant and often overlooked. These systems are increasingly used across all stages of the targeting process, from identifying and prioritizing targets to supporting engagement decisions. While AI-DSS are designed to assist human decision-makers, they often shape, and at times constrain, human judgment, raising serious concerns about accountability, oversight, and compliance with international humanitarian law.
The submission emphasizes three core points:
- The growing influence of AI-DSS on human control and legal decision-making must not be underestimated.
- Current governance frameworks have largely ignored these systems, focusing instead on fully autonomous weapons.
- Clear safeguards are needed to ensure that human reasoning remains central in use-of-force decisions.
The submission calls for increased transparency, regulation, and dedicated international attention to military uses of AI-DSS in targeting.
Read the full submission below and the Secretary-General’s call here.
Photo by Tima Miroshnichenko