A scientist’s opinion: Interview with Dr Vincent Boulanin about “killer robots”


Interview with Dr. Vincent Boulanin, Senior Researcher at SIPRI

How do we want to maintain human control over the use of force?

Can you briefly explain how you became involved with autonomous weapons?

Vincent Boulanin: I work for SIPRI, an independent think tank that works on issues related to armaments and arms control. In that context, I started to follow the emerging debate on autonomous weapons at the UN in 2014. I noticed then that many myths and misconceptions about AI, autonomous systems, autonomous weapons and robotics were circulating in the discussion. I therefore started to conduct a study that would give all the stakeholders a better understanding of the current state, and development trajectory, of autonomy in weapon systems. My conviction was that to have a constructive policy discussion you need to have a concrete understanding of the possibilities but also the limitations of the technology. The purpose was to help anchor the discussion on autonomous systems in the current reality of weapon systems development.


What would you say are the one or two most important evidence-based insights you have developed in this area that you would like stakeholders, such as those at the European Parliament, to take on board?

Vincent Boulanin: The first one is that it’s not really helpful to think of autonomous weapons as something that belongs to a distant future. Autonomy is already a reality of weapon systems development. It is currently used to support various capabilities in weapon systems, including mobility, intelligence and targeting. Weapon systems that, once activated, can acquire and engage targets without direct human involvement have existed for decades. They are predominantly used for defensive purposes, for example to protect ships, ground installations or vehicles against incoming projectiles. They are operated under human supervision and are intended to fire autonomously only in situations where the time of engagement is deemed too short for humans to be able to respond. Loitering weapons are the only ‘offensive’ weapon system type that is known to be capable of acquiring and engaging targets autonomously. The loitering time and geographical areas of deployment, as well as the category of targets they can attack, are determined in advance by humans.

In order to discuss the problems posed by the future advance of AI in weapon systems, it’s useful to look at the state of activity today, at why we are content with certain types of applications, and at what might change with future technological developments.

The second is that the debate on lethal autonomous weapon systems (LAWS) has been very much focused on the development of ‘full autonomy’. The focus on fully autonomous systems is somewhat problematic, as it neither reflects the reality of how the military is envisioning the future of autonomy in weapon systems, nor allows for tackling the spectrum of challenges raised by the progress of autonomy in weapon systems in the short term. Autonomy is bound to transform the way humans interact with weapon systems and make decisions on the battlefield, but it will not eliminate their role. Weapon systems will never be ‘fully’ autonomous in the sense that their freedom of action will always be controlled by humans at some level and their programming will always be the product of human plans and intentions. Even a soldier is not fully autonomous, in that he is always part of a hierarchy and a system that monitors and limits what he can and cannot do. It would be the same for autonomous weapon systems. The key question is then: what control should humans maintain over the weapon systems they use, and what can be done to ensure that such control remains adequate or meaningful as weapon systems’ capabilities become increasingly complex and autonomous?

At what point do we consider that human control is too disconnected from the system for the latter to be used in a lawful or ethical way? So, basically, how do we want to calibrate the human-machine interaction? That, for me, is the key concern. Is it sufficient to have a system that is pre-programmed by humans and then fielded? That could be one view. Or do we need to have a human operator constantly supervising the weapon, or in a position to take back control? These are the fundamental questions that we need to solve. It’s not easy because there is no ‘one size fits all’. The discussion will depend very much on the type of application and the type of combat situation that we’re talking about. Are we talking about a counter-insurgency scenario, or a major conflict between China and the US involving very advanced weapon systems, where things can go really fast? Are we in a land environment, an urban warfare environment, under the water? All these variables will affect the way we think about the need for control and the level of autonomy that we’re ready to give to machines.


The dataset on deployed autonomous functions in your report “Mapping the Development of Autonomy in Weapon Systems” looks really valuable. What do you think are the most compelling results that emerge from the information you’ve compiled?

Vincent Boulanin: The difficulty is that the dataset is not comprehensive. It was not physically feasible for us to cover all the systems that exist, so it’s just a sample that allows us to understand what’s going on. But I guess the main lesson is that we see many types of autonomy in development, and that it’s used in many different ways in systems today. The level of complexity and sophistication of autonomy varies greatly depending on the type of function that you’re talking about. The way that humans take back control also varies. Another interesting thing is that the countries developing these systems are major military powers. We can see some prototypes being developed in smaller countries that have high-tech industries, but when you talk about military applications, even if some civilian applications could be weaponised, we’re still talking about the same kinds of players. The countries that do matter are the US, China, Russia, France and the UK – these are the ones with a proper defence industry and research agencies able to develop the very complex programmes that will enable cutting-edge capabilities.


Much of the debate in this area seems to centre on definitions, especially on what constitutes ‘fully autonomous’. Can we bring evidence to bear to help with this at all?

Vincent Boulanin: My take on the debate on definitions – we are talking about the debate at the UN – is that I don’t think it will be possible to find a technology-centric definition, i.e. a definition that spells out concrete technical characteristics, because it will be extremely difficult to agree on terms that allow you to clearly tell the difference between existing systems and future systems. I’m among a number of experts who think that focusing on human control is more useful. Rather than thinking about levels of autonomy, we can think about the acceptable level of human control. How do we want to maintain human control over the use of force? That, for me, is the key question and that’s what needs to be agreed on.


Is it possible that the attention this area is getting now is disproportionate, and a reflection of current hype around AI?

Vincent Boulanin: It’s very much that. It’s a sexy topic. It’s new. I’m not saying that it’s not a very important topic, but if you’re a bit cynical you could argue that there are much more urgent humanitarian issues right now. People are being killed by weapons that are a million miles from being autonomous; they are just old-school small arms that have been smuggled around since the end of the Cold War. You could argue that the international community should rather focus on these urgent issues. At the same time, AI is having a renaissance, and that will impact the future of warfare and the international community. The debate on autonomous weapons is useful because it makes the international community discuss the challenges raised by military uses of AI. Under the concept of autonomous weapons we’re discussing a lot of different things. Sometimes they are not directly relevant to the convention, but that’s useful.
