Interview with Tim Sweijs, Director of Research at The Hague Centre for Strategic Studies.
We should think about how to regulate and control their spread
Why do you think the subject has been gaining more attention recently?
Tim Sweijs: Well, first of all, we have seen major progress in deep learning based on neural network pattern recognition since 2011. As a result, AI systems are transforming entire industries: financial trading, where assets are managed by algorithms rather than by humans; personal advertising, think of behavioural targeting; and transportation, where the race for driverless cars is on. These developments are not science fiction: they are very visible and tangible. This has certainly not escaped the attention of defence planners, who are actively exploring AI applications in the military context. Leading powers are making considerable investments in AI-related R&D. Last summer the Pentagon set up the Joint Artificial Intelligence Center (JAIC), with a $1.7 billion budget over the next five years. China has established civil-military AI fusion cells. The French government has recently announced a EUR 1.5 billion AI initiative, with EUR 100 million dedicated to AI applications in a military context. Russia is also stepping up its efforts, with a clear focus on unmanned systems. The Russian Military-Industrial Commission has even set the ambitious target of making thirty percent of military equipment robotic by 2025.
Much of the debate in this area seems to be about definitions, especially about what constitutes ‘fully autonomous’. Can we bring evidence to bear to help in this at all, or is it an entirely values-based debate?
Tim Sweijs: We can bring analytical rigour to clarify misconceptions in the discussion of lethal autonomous weapon systems versus autonomous, semi-autonomous and automated systems.
A Lethal Autonomous Weapon System (LAWS) is a system that can independently identify a target and decide to destroy that target without human involvement. There are differences between fully autonomous systems, which select targets and make decisions on their own; semi-autonomous systems, which engage pre-selected targets; and automated systems, which perform a predefined set of tasks in a controlled environment.
There are different degrees of autonomy, and it is useful to take note of them: there is a distinction between ‘human in the loop’, ‘human on the loop’ and ‘human out of the loop’. We must also consider purpose: does the system involve the application of lethal force – in other words, is it a weapon system – or not?
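To make the two dimensions concrete, the distinction Sweijs draws can be sketched roughly as follows. This is an illustrative Python sketch only: the class and enum names, and the rule that a system counts as LAWS when it both applies lethal force and keeps the human out of the loop, are assumptions drawn from his description rather than an agreed standard.

```python
from dataclasses import dataclass
from enum import Enum

class HumanRole(Enum):
    """Degree of human involvement, as distinguished in the interview."""
    IN_THE_LOOP = "human approves each action"
    ON_THE_LOOP = "human supervises and can intervene"
    OUT_OF_THE_LOOP = "no human involvement in targeting decisions"

@dataclass
class MilitarySystem:
    name: str
    human_role: HumanRole
    applies_lethal_force: bool   # the 'purpose' dimension: is it a weapon system?

def is_laws(system: MilitarySystem) -> bool:
    """A system counts as LAWS only when both dimensions line up:
    it applies lethal force and the human is fully out of the loop."""
    return (system.applies_lethal_force
            and system.human_role is HumanRole.OUT_OF_THE_LOOP)

# Example: an ordnance-disposal robot is RAS but not LAWS.
eod_robot = MilitarySystem("EOD robot", HumanRole.IN_THE_LOOP, applies_lethal_force=False)
assert not is_laws(eod_robot)
```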
There is a fraction there that pertains to LAWS: those systems that can independently identify a target and also execute the attack on that target without any human interference. This is of course far from trivial; it’s a very important legal and ethical issue. But in the public and European debate this broad perspective has been winnowed down to ‘killer bots’. What I want to do as a student of war, and as someone who’s interested in political, social but also technological trends and how they affect the character of conflict, is to say ‘hey, this is a very important topic, we need to discuss this’. Fortunately, for about two years we’ve been actively doing that, but we also need to look at other applications that may be just as important for the future of conflict.
What are the potential advantages of such autonomous weapons?
Tim Sweijs: Robotics and autonomous systems (RAS) that do not involve the application of lethal force will have advantages. First and foremost, they will reduce personnel risk, leaving the dull and dirty stuff to machines and resulting in fewer battle fatalities and casualties. They will change the economics of war.
On LAWS, my personal conviction is that humans should always be in the loop. But suppose a nefarious actor decides to use LAWS: that actor could gain military-strategic advantages because it would be able to move faster than actors that don’t use LAWS.
What about the disadvantages?
Tim Sweijs: RAS mean that there are fewer political constraints on going to war, because fewer people are sent to do the actual fighting. They also democratise violence by levelling the playing field.
LAWS herald the prospect of machines fighting machines, a new era not just for war but for mankind. There could also be spiral dynamics when autonomous systems run out of control, with unforeseen consequences.
What would you say are the one or two most important evidence-based insights you have developed in this area that you might like stakeholders like those at the European Parliament to take on board?
Tim Sweijs: Distinctions matter. We need to try to regulate LAWS, but we should not confuse this with RAS. That essentially comes back to the different degrees of autonomy and the different purposes to which autonomous systems can be put.
I think a very useful contribution that algorithms can make is helping us move from early warning to early action. Then on the physical side, if you talk about actual systems, unmanned or semi-autonomous systems that can help remove explosive ordnance are also very useful. So are small robots that can be deployed in a military context to do reconnaissance and eavesdropping: little helicopters, very small drones, that can go out and send back images. I think that is very useful.
But that is very different from a system that independently identifies a target and also executes an attack on it. These are the lethal autonomous weapon systems that are an issue of concern, and we should think about how to regulate and control their spread. That is the evidence-based insight. Details matter.
We also need to combine realism with idealism: the race for AI in the military domain is on, and Europe can only have a say and shape its future course if it participates. If we as Europeans, as a values-based community, want to help shape the future applications of AI in the military domain, we need, at the very least, to be involved.
Is there precedent or evidence from other areas of AI that are gaining prominence today that we can apply to how to resolve this issue in autonomous weapons? Is this similar to people posing the ‘trolley problem’ to self-driving vehicles, for example?
Tim Sweijs: There are two things that come to mind. As I mentioned, an increasing amount of Wall Street trading is done by algorithms rather than by human traders. Once in a while – I think 2011-2012 saw particularly extreme examples – these algorithms start engaging in a bidding war that escalates, as a result of which they execute a ridiculously large number of trades. That then leads to a rapid fall in the overall market.
The interesting thing is that when people later started analysing why that happened, it was often impossible to identify the causes – it’s a black box. Eventually they started building in ranges within which share prices may fluctuate: a price can only move within one or two standard deviations of the average for the past hour or so. If it exceeds that range, trading is halted automatically for 10-15 seconds, which is often enough for these algorithms to disengage. If that happens more than a few times an hour, trading stops for a longer period of time.
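The circuit-breaker mechanism Sweijs describes can be sketched in a few lines of Python. This is a minimal illustration of the idea only: the window length, the two-standard-deviation band, the halt durations and the limit on halts per hour are assumptions for the example, not the rules of any real exchange.

```python
from collections import deque
from statistics import mean, stdev

class CircuitBreaker:
    """Toy circuit breaker in the spirit of the trading halts described above.

    All parameters (window size, 2-sigma band, halt durations) are
    illustrative assumptions, not the rules of an actual exchange.
    """

    def __init__(self, window=3600, sigma_band=2.0,
                 short_halt=15, long_halt=300, max_halts_per_hour=3):
        self.prices = deque(maxlen=window)   # roughly one price per second for the past hour
        self.sigma_band = sigma_band
        self.short_halt = short_halt
        self.long_halt = long_halt
        self.max_halts_per_hour = max_halts_per_hour
        self.halt_times = deque()            # timestamps of recent halts

    def check(self, price, now):
        """Return 0 if trading may continue, otherwise the halt length in seconds."""
        if len(self.prices) >= 30:           # need some history before judging deviations
            mu, sigma = mean(self.prices), stdev(self.prices)
            if sigma > 0 and abs(price - mu) > self.sigma_band * sigma:
                # Price left the allowed band: record the halt.
                self.halt_times.append(now)
                # Forget halts older than an hour.
                while self.halt_times and now - self.halt_times[0] > 3600:
                    self.halt_times.popleft()
                # Repeated halts within the hour trigger a longer pause.
                if len(self.halt_times) > self.max_halts_per_hour:
                    return self.long_halt
                return self.short_halt
        self.prices.append(price)
        return 0
```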
There’s no equivalent of that in war. No one says ‘OK guys, let’s take a time out’ and everyone takes a time out. In my mind that’s very worrisome, especially if certain state actors or military organisations rush immature AI applications to the battlefield. There’s a lot of friction on the battlefield, a lot of things that could go wrong. That is one insight that flags the pertinence and urgency of this issue. We as a community should start thinking about useful analogies, in a military context, to halting trade on Wall Street. To be quite honest, I’ve only just started to think about this. But I think we need to start thinking about how we could create certain rules of engagement – whether you could hard-code into an algorithm a rule of engagement that it could not target civilians. As I say this out loud it almost sounds naïve, but I think we should start thinking about how we could write certain rules into algorithms that would prevent these kinds of flash-crash dynamics from happening in a battlefield environment, should military organisations decide to introduce these systems onto battlefields.
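To show what ‘hard-coding a rule of engagement’ could mean at the software level, here is a purely hypothetical sketch: a guard that must approve every proposed engagement and refuses anything classified as protected or classified with low confidence. The class names, categories, confidence threshold and the assumption of a reliable classifier are all illustrative; the interview does not describe any concrete implementation, and Sweijs argues a human should stay in the loop regardless.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A detected object, as a hypothetical targeting system might represent it."""
    track_id: str
    category: str        # e.g. "civilian", "military_vehicle", "unknown"
    confidence: float    # classifier confidence in the category, 0..1

class RulesOfEngagementGuard:
    """Hypothetical hard-coded rule-of-engagement check (illustrative only)."""

    PROTECTED = {"civilian", "medical", "unknown"}
    MIN_CONFIDENCE = 0.95

    def may_engage(self, track: Track) -> bool:
        # Rule 1: never clear engagement of protected or unidentified categories.
        if track.category in self.PROTECTED:
            return False
        # Rule 2: refuse when the classification itself is not confident enough.
        if track.confidence < self.MIN_CONFIDENCE:
            return False
        # True means only that the rule set does not block the engagement;
        # a human operator would still make the final decision.
        return True

# Example: a track classified as civilian is always refused.
assert not RulesOfEngagementGuard().may_engage(Track("t-01", "civilian", 0.99))
```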
In September 2017, the UK’s Ministry of Defence issued updated guidance stating that “the UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all”.
In light of such comments, is it possible that the attention this area is getting now is disproportionate, and a reflection of current hype around AI?
Tim Sweijs: No: I disagree with the premise that they are not likely to be there for many years and I think that now is the time to start thinking about and working towards a new generation of weapon control regimes to deal with this issue. I don’t think it’s overdone. I think it is an important topic. I think that it is around the corner. I think state and non-state actors are working on this. And I think that the technology is already there. So I think this is something that we should seriously think about and worry about.