Evidence from existing autonomous weapon and AI systems shows that the level of human control plays a crucial role in the current debate over how such systems should be regulated.
On 12 September 2018 the European Parliament (EP) adopted a resolution urging ‘international negotiations on a legally binding instrument prohibiting lethal autonomous weapon systems’ (LAWS). Similarly, the United Nations’ Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on LAWS met in Geneva in August 2018, although the US, Russia, Israel, Australia and South Korea decided against a proposed resolution to ban fully autonomous weapons. Discussions continued in November 2018 at the CCW’s annual conference, where the GGE presented its report on autonomous weapon systems. The GGE will consider the issue further in 2019, although only for seven days, compared with the ten days spent in 2018. One of the key issues in these discussions is defining what LAWS actually are.
Earlier, in October 2012, a coalition of non-governmental organizations (NGOs) had launched the Campaign to Stop Killer Robots, which works to ban the development, production and use of fully autonomous weapons.
The EP’s position is that ‘non-autonomous systems such as automated, remotely operated, and tele-operated systems should not be considered LAWS’. According to the resolution, the use of LAWS ‘raises fundamental ethical and legal questions of human control, in particular with regard to critical functions such as target selection and engagement’, since ‘machines and robots cannot make human-like decisions involving the legal principles of distinction, proportionality and precaution’.
The Stockholm International Peace Research Institute (SIPRI) report ‘Mapping the Development of Autonomy in Weapon Systems’ provides useful evidence to inform this matter.
Vincent Boulanin, researcher, Stockholm International Peace Research Institute, Sweden: “In order to discuss the problems posed by the future advance of AI in weapon systems, it’s useful to look at what is the state of activity today and why are we content with certain types of applications and what might change with future technological developments.”
The report includes a dataset of 381 military systems with autonomous capabilities. Some of them, mainly defensive systems, can acquire and engage targets autonomously. Generally operated under human supervision, they are intended to fire autonomously ‘only in situations where the time of engagement is deemed too short for humans to be able to respond’. Attitudes towards, and regulation of, such systems therefore provide important precedents for the more advanced LAWS that will evolve from them.
Vincent Boulanin: “I’m among a number of experts that think that focusing on human control is more useful. Rather than thinking about levels of autonomy, we can think about the acceptable level of human control. How do we want to maintain human control over the use of force? That’s for me the key question and that’s what needs to be agreed on.”
Offensive examples are limited to ‘loitering weapons’ such as ‘suicide drones’, which are often similar to guided missiles that ‘hang around’ until a target appears. Yet by many definitions these are not fully autonomous, as humans define in advance their waiting time, their range and the types of targets they can attack. Overall, the report shows that autonomous weapons are already very diverse, so arriving at a precise definition of what exactly constitutes LAWS will likely require lengthy and controversial discussion. SIPRI and other expert groups are therefore seeking to shift the legal question from ‘what kinds of autonomy are allowable’ to ‘what levels of human control must be in place’.
Tim Sweijs, Director of Research at The Hague Centre for Strategic Studies in the Netherlands: “Unmanned or semi-autonomous systems that can help in removing explosive ordnance, I think they’re also very useful. Small robots that can be deployed in a military context to do reconnaissance and eavesdropping; little helicopters, very small drones, can go out and deploy and send back images. I think that is very useful. But that is very different from systems that independently identify, target and also execute an attack on a target. These lethal autonomous weapon systems are an issue of concern, and we should think about how to regulate and control their spread. This is the evidence-based insight. Details matter.”
Case studies from the rapid advance of artificial intelligence (AI) are also relevant to the discussion of human control over LAWS. For example, algorithmic trading in financial markets caused ‘flash crashes’ that led to rapid stock market declines in 2010 and 2012. With no human in the loop at the moment of failure, and given how difficult the behaviour of such algorithms often is to interpret, these events were hard to prevent. Ultimately, ‘circuit breaker’ restrictions that halt trading when prices change too rapidly seem to have brought the situation under control.
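The rate-of-change restriction described above can be illustrated with a toy sketch. This is a simplified, hypothetical illustration of the general idea, not any exchange's actual mechanism; the function name and 5% threshold are assumptions chosen for clarity:

```python
# Illustrative sketch of a "circuit breaker": halt trading when the
# price falls too fast between ticks. Not a real exchange rule.

def circuit_breaker(prices, max_drop=0.05):
    """Return the tick index at which trading would halt, or None.

    Halts when the price falls by more than `max_drop` (as a fraction)
    relative to the previous tick -- a toy stand-in for the restrictions
    introduced after the flash crashes.
    """
    for i in range(1, len(prices)):
        change = (prices[i] - prices[i - 1]) / prices[i - 1]
        if change <= -max_drop:
            return i  # halt here, before the decline can cascade
    return None

# A sudden ~8% drop at tick 3 trips the breaker:
ticks = [100.0, 99.5, 99.8, 91.8, 80.0]
print(circuit_breaker(ticks))  # → 3
```

The point of the sketch is the contrast Sweijs draws next: a market can impose a mandatory pause on all participants at once, whereas a battlefield has no equivalent mechanism.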
Tim Sweijs: “There’s no equivalent of that in war. No one says ‘OK guys, let’s take a time out’ and everyone takes a time out. In my mind that’s very worrisome, especially if certain state actors, military organisations, rush immature AI applications to the battlefield. There’s a lot of friction in the battlefield, a lot of things that could go wrong.”
Such examples underline how crucial it is that the world reach agreement on how to regulate LAWS.
European Parliament speaks out against “killer robots”
European Parliament’s September 12th resolution on LAWS
Agenda of November 2018 CCW conference
Statements from the November 2018 Meeting of CCW High Contracting Parties
Campaign to Stop Killer Robots
SIPRI report: Mapping the Development of Autonomy in Weapon Systems
Hague Centre for Strategic Studies report: Artificial Intelligence and the Future of Defense