A scientist’s opinion: Interview with Tim Stevens about the AI-cyber nexus

Interview with Dr Tim Stevens, Senior Lecturer in Global Security at the Department of War Studies, King’s College London, and head of the KCL Cyber Security Research Group.


In your view, is the introduction of artificial intelligence (AI) and machine learning (ML) technologies bringing about a change of paradigm in cybersecurity? Or is it more about upgrading the current practices and capabilities of human agents?

Tim Stevens: AI/ML is a very diverse set of technologies developed for and directed towards a wide range of cybersecurity functions and applications. In some contexts, AI/ML deployment is mainly geared towards automating tasks previously undertaken by humans and rendering them more efficient, particularly when dealing with big data sets. In other contexts, it provides new ways of processing and analysing data to detect anomalous behaviour on networks in real time, rather than relying on outdated archives of malware signatures. Doing this, however, requires a different way of looking at the world. This is a mode of calculation that puts data before hypotheses, one in which algorithms search for patterns in the data instead of testing patterns already provided for them. In practice, it’s usually a bit of both, but the shift here is towards relying on algorithms to generate significant proportions of what we might call ‘expert knowledge’ about cybersecurity threats. This does imply a disruptive, perhaps even radical, transformation of who or what produces cybersecurity knowledge and how they go about it. It certainly means we need to pay very close attention not only to the practical impacts of AI/ML but also to its political implications.
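To make the contrast concrete, the sketch below sets a toy signature check alongside an unsupervised anomaly detector trained only on examples of ‘normal’ traffic. It is a minimal illustration assuming Python with scikit-learn; the feature names, numbers and the choice of IsolationForest are assumptions made for illustration, not a description of any system discussed in the interview.

```python
# Illustrative contrast between signature matching and data-driven anomaly
# detection. All values here (the placeholder hash, feature names, thresholds)
# are made up for illustration and do not describe any production system.
import numpy as np
from sklearn.ensemble import IsolationForest

# --- Signature-based view: test traffic against patterns we already hold ---
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # placeholder hash

def signature_match(payload_hash: str) -> bool:
    """Flag traffic only if it matches a previously catalogued signature."""
    return payload_hash in KNOWN_BAD_HASHES

# --- Data-driven view: let an algorithm learn what 'normal' looks like -----
rng = np.random.default_rng(0)
# Toy network-flow features: [bytes sent, duration (s), distinct ports touched]
normal_flows = rng.normal(loc=[5_000, 2.0, 3], scale=[1_000, 0.5, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)  # learns a baseline from the data, no signatures needed

new_flows = np.array([
    [5_200, 2.1, 3],       # looks like ordinary traffic
    [90_000, 0.2, 400],    # burst of data across hundreds of ports
])
print(signature_match("unknown-hash"))   # False: no catalogued pattern matches
print(model.predict(new_flows))          # 1 = consistent with baseline, -1 = anomalous
```

The point of the toy example is the one Stevens makes: the second approach does not test data against patterns provided in advance, it generates its own notion of what counts as suspicious from the data it is given.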


In your article ‘Knowledge in the grey zone: AI and cybersecurity’, you argue that efficiency requirements and convenience are pushing AI to the forefront of threat intelligence, at the expense of ‘truth’. Can you explain what consequences these organisational choices could have?

Tim Stevens: The organisational decision is to prioritise the production of useful or actionable intelligence. Intuitively, it makes sense to use any means necessary to extract whatever companies and their clients need from huge volumes of poorly structured data. But does the use of AI/ML come at the expense of good or truthful information? In an extreme case where ML is left unattended to generate its own patterns from the data and construct threats, the algorithm has no innate way of testing its findings against established notions of justice or ethics, or even, in many cases, common sense. Most cybersecurity AI/ML systems do not act like this, of course – they have humans in the loop, setting and adjusting parameters. The challenge is therefore to harness the relative advantages of humans and machines so that efficiency does not become the only decisive factor in AI/ML use, at the expense of important social and political considerations.


Problems regarding AI training data, prominent biases, algorithmic transparency and liability questions are central to discussions on civilian AI applications. Can these factors also pose threats in the context of strategic cybersecurity and defence? What could these threats be?

Tim Stevens: I am relatively confident that AI/ML is being used to inform rather than determine the strategic cybersecurity decision-making process. There are conversations about how AI/ML can play a stronger role in high-level defence and security, but these are generally speculative. At present, cybersecurity decisions (public attribution of threat actors, determination of counter-responses, etc.) are the preserve of people with the legal and constitutional authority to make those decisions, and rightly so. From my perspective, the challenge of biased AI is greater for warfighting and battlespace management than it is for cybersecurity, as the stakes are generally higher. That said, we do not want to find ourselves in a situation of over-confidence in algorithmic objectivity to the detriment of proper oversight and transparency. The mantra ‘let the data speak for themselves’ might be an interesting approach to big data analytics, but it is not a sound basis for strategic action.


Could human-machine teaming have a positive impact on cybersecurity, in the sense of mitigating escalation dangers, or will it generate new dangers?

Tim Stevens: The cybersecurity loop involves two very different forms of agent – the human and the machine – that ‘think’ in very different ways: people reason, machines calculate. Moreover, neither understands the other in any native or intelligible sense. There are obvious ways in which one can enhance the other, but we should always remain mindful of the potential negative effects of these hybrid forms of agency. At present, we usually delegate the tedium of large-scale data processing to machines designed for that job; we instruct and oversee that process. When major decisions need to be made, it is humans who make those decisions, and while we may be imperfect, we bring intuition and reason to that process in a way that machines currently cannot. It may well be that there are forms of machine intelligence that will be able to assist in decision-making in important ways, but strategy will remain an art, not a science. Our main challenge, perhaps, is recognising and understanding where responsibilities lie in these hybrid arrangements, and what the limitations of our current legal and ethical frameworks are, alongside proper communication of risks and opportunities to the public.


In your view, what challenges may emerge from the introduction of AI in cybersecurity given the domain’s characteristics of continuous interaction between offenders and defenders, and in the context of ‘persistent engagement’ approaches?

Tim Stevens: The answer to this question lies entirely in what degree of autonomy is afforded to software agents in responding to threats identified by AI/ML systems. Will they just be flagged for human analysis or, much more likely given the timescales specific to this environment, will they be tasked with responding in particular ways and within set bounds? This process is already underway and network defenders are experimenting with multiple approaches to these issues. My concern lies in establishing what constitutes a decision and who is responsible for it. We cannot just respond to an aggrieved party by blaming accidental escalation on a machine, for instance. The good news is that states are generally reluctant to escalate network skirmishes, whether in-domain or in other environments. The bad news is that software doesn’t think like people. In that context, we need to think very deeply about what can and cannot be automated. Escalation is not the intended outcome of a posture like persistent engagement, but neither is this stance risk-free, particularly if tactical decision-making is devolved to machines. We need to keep a close watch on this issue.
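As a purely illustrative sketch of what ‘responding in particular ways and within set bounds’ might look like in software, the example below gates automated responses so that only low-impact containment actions run without a human decision. The action names, severity scale and approval threshold are assumptions made for illustration; they do not describe any particular product or the approaches defenders are actually experimenting with.

```python
# Illustrative human-in-the-loop gate for automated responses.
# Action names, severity scores and the escalation threshold are assumptions
# made for illustration only.
from dataclasses import dataclass

@dataclass
class DetectedThreat:
    host: str
    severity: int          # 1 (low) .. 10 (high), as scored by an AI/ML detector
    proposed_action: str   # e.g. "quarantine_file", "isolate_host", "block_subnet"

# Only low-impact containment steps may run without a human decision.
AUTO_APPROVED_ACTIONS = {"quarantine_file", "rate_limit_connection"}
SEVERITY_REQUIRING_HUMAN = 7

def handle(threat: DetectedThreat) -> str:
    """Decide whether a machine may act alone or a human must decide."""
    if (threat.proposed_action in AUTO_APPROVED_ACTIONS
            and threat.severity < SEVERITY_REQUIRING_HUMAN):
        return f"AUTO: {threat.proposed_action} on {threat.host}"
    # Anything broader or more severe is flagged for an analyst, not executed.
    return f"ESCALATE to analyst: {threat.proposed_action} on {threat.host}"

print(handle(DetectedThreat("ws-042", severity=3, proposed_action="quarantine_file")))
print(handle(DetectedThreat("dc-01", severity=9, proposed_action="isolate_host")))
```

The sketch also makes Stevens's accountability point visible: wherever the threshold is drawn, someone has to decide where it sits and answer for the actions taken automatically beneath it.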


The EU has recently been active in promoting norm-setting initiatives for transparent and ethical AI. How do you view these activities? Have implications from AI deployment in cybersecurity been sufficiently addressed by the EU, and what more can be done in the direction of human-centric AI in this domain?

Tim Stevens: The EU has put its collective values at the centre of its AI initiatives, as well as an acute sense of the importance of appropriate law and ethics. AI-enabled cybersecurity – and the cybersecurity of AI – has figured in several EU communications and is a specific focus of the newly strengthened EU Agency for Cybersecurity (ENISA). These are all welcome developments, and the EU is, in some senses, continuing to lead the world in shaping the emerging relationship between cybersecurity and AI through investment, regulation and public education. The EU has an opportunity to promote and export its vision of safe and secure AI. I would suggest that the EU pay closer attention to the military and defence aspects of AI and cybersecurity – a conversation it does not yet seem willing to have – especially if its military structures wish to remain interoperable with US systems. While national security is not an EU competence, it can help its Member States think through some of the problems associated with AI-enabled military cybersecurity, including their ethical dimensions.
