The global digital transformation appears to be an open-ended process in which digital technologies (such as AI) are integrated across markets, governments and society. Younger generations have already embraced an ‘always-on’ lifestyle.
The number of Internet of Things endpoints (physical devices that are connected to the internet) is projected to rise to 41.6 billion by 2025 worldwide. This includes an expanding array of connected physical objects, from consumer wearables, self-driving cars and smart homes, to robots and sensors for digital manufacturing. Critical infrastructures, in turn, hinge on complex interconnected industrial control systems to meet optimisation demands.
Securing this expansive attack surface is becoming disproportionately challenging for network defenders. The interconnectedness of networks and systems creates a world of opportunity for cybercriminals.
As cybersecurity teams deal with thousands of alerts every day, alert fatigue drains the industry and overwhelms professionals. In this difficult-to-map landscape, the demand for speed and scale in threat intelligence is the entry point for artificial intelligence and machine learning (AI/ML) in digital defence systems.
But AI also has its downside – offensive action launched from dark corners of cyberspace. In the future, falling AI/ML costs and expanding expertise will be available to a complex ecosystem of malicious actors made up of states and their proxies, including those tied to organised crime. It is not just the mundane task of patching up cybersecurity inefficiencies that drives AI/ML innovation, but the race to get ahead of the curve in an increasingly adversarial geopolitical environment.
Where are we now, and what is coming next?
Different ML techniques used to train AI systems come with different timelines within which expectations might be met.
First, systems generally differ as to which tasks they perform best. For example, systems trained through supervised learning learn to classify objects. Those trained via unsupervised learning are good at finding patterns which might not otherwise have been identified or labelled by humans.
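To make the distinction concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn on synthetic data; the features and labels are hypothetical, not drawn from the interviews): a supervised classifier learns to reproduce labels provided by humans, while an unsupervised method groups the same data without any labels at all.

```python
# Purely illustrative sketch: supervised classification vs unsupervised pattern-finding.
# Requires NumPy and scikit-learn; the data, features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # 200 samples, 4 arbitrary features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # human-provided labels (e.g. malicious vs benign)

# Supervised learning: the model learns to reproduce labels supplied by humans.
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("classification accuracy on the training data:", clf.score(X, y))

# Unsupervised learning: no labels at all; the model groups samples by similarity,
# potentially surfacing patterns no human analyst has named in advance.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```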
Systems also differ in their degree of generality, that is, the degree to which they can apply their training across different situations and environments. Different training methods are said to produce systems with different capacities to generalise. Tasks taught through supervised and unsupervised learning are rather specialised. With reinforcement learning, however, systems can adjust their behaviour to new environments or tasks. This is of course an oversimplification. It is widely acknowledged that high generalisability is far from realised.
These systems are at different levels of maturity or technological readiness. Those trained via reinforcement learning are at more experimental stages, and in fact, most commercialised models are developed through both supervised and unsupervised techniques. They therefore feature both associated capabilities: classification and pattern recognition.
Finally, existing and anticipated capabilities correspond to different cybersecurity operational requirements. The tasks performed by systems currently in use – classification and pattern recognition – mostly answer to threat intelligence needs, such as the detection of malware and anomalies (including behavioural anomalies). By contrast, self-configuration capabilities, like self-patching and self-propagation, are at experimental stages, and are associated with techniques such as reinforcement learning.
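As an illustration of the anomaly-detection use case just described, the hedged sketch below applies an Isolation Forest, one common unsupervised technique, to synthetic, log-like behavioural features. The feature names and the contamination rate are assumptions made for the example; it is not a reference implementation of any production system.

```python
# Purely illustrative sketch: unsupervised anomaly detection for threat intelligence.
# The feature names and values are hypothetical; real deployments use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" behaviour: [logins_per_hour, megabytes_out, failed_auth_attempts]
normal = rng.normal(loc=[5.0, 20.0, 1.0], scale=[2.0, 5.0, 1.0], size=(500, 3))
# A handful of synthetic outliers mimicking anomalous sessions
outliers = rng.normal(loc=[50.0, 500.0, 30.0], scale=[5.0, 50.0, 5.0], size=(5, 3))
events = np.vstack([normal, outliers])

# 'contamination' encodes an assumption about how rare anomalies are in the data.
detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = detector.predict(events)          # -1 = flagged as anomalous, 1 = normal
print("events flagged for analyst review:", int((flags == -1).sum()))
```

Events flagged in this way would then feed the kind of alert triage discussed above, which is precisely where alert fatigue and questions of algorithmic oversight re-enter the picture.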
What do these different deployment timelines mean for cybersecurity? We asked two leading experts for their take on the state of play, future implications and policy recommendations, to get a grasp of how things are changing in cyberspace.
Starting out: A new way of looking at the world
Dr Tim Stevens from King’s College London: “Machines could increasingly generate knowledge about cybersecurity threats to inform decision-making. (…) This requires us to embrace ‘a different way of looking at the world (…), a mode of calculation that puts data before hypotheses, one in which algorithms search for patterns in the data instead of testing patterns already provided for them’. (…) This does imply a disruptive, perhaps even radical, transformation of who or what produces cybersecurity knowledge and how they go about it. It certainly means we need to pay very close attention not only to the practical impacts of AI/ML but also to its political implications.” – Read the full interview with Tim Stevens
Important cues are lost daily due to cognitive limitations and organisational shortfalls, so why should we problematise this way of drawing inferences?
Stevens argues that this radical transformation in “who or what produces cybersecurity knowledge” could have social and political effects that we should be watchful of. He explains that “in an extreme case, where ML is left unattended to generate its own pattern of data (…), the algorithm has no innate way of testing its findings against established notions of justice or ethics, or even in many cases common sense”. When an algorithm judges whether behavioural patterns are deviant or normal, the line between recognition and construction becomes blurred. One resulting problem is “over-confidence in algorithmic objectivity to the detriment of proper oversight and transparency”.
Transparency relates to the issue of the intelligibility of algorithmic output, often referred to as the black box problem – the fact that the models are too complex to allow for “inspection and control by human operators”. Given that the thinking process of an AI system remains elusive, how close are we to trusting it with impactful decisions? For Stevens, the answer is clear: “It may well be that there are forms of machine intelligence (…) able to assist in decision-making in important ways, but strategy will remain an art, not a science”.
Moving forward: What implications for cyber engagement?
Machines might not be taking over strategy soon, but they will enable moves that will affect strategists’ behaviour.
Dr Christopher Whyte from Virginia Commonwealth University: “Things AI enables are going to be the equivalent of moving from hot air balloons to jet planes. Malware employed with AI baked in is going to perform much faster analyses of attack surfaces (…), distribute itself and attack very quickly (…) and at the same time elude forensic investigation for as long as possible.” – Read the full interview with Christopher Whyte
Advantages in speed and scale could amplify existing cyberspace dynamics: “There are going to be dramatic problems of information overload, complexity and ambiguity” contributing to “issues with perception and misperception which already plague everything cyber-related”.
Ambiguity and misperception do not only stem from technical complexity. Intrinsic to the black box problem is the human factor, and the inevitably skewed informational inputs fed into systems through training. Questions of bias are integral to discussions on AI/ML, but what are their effects on the AI-cyber nexus? Whyte describes this as an increasingly adversarial learning environment. “While offensive and defensive capabilities have always evolved in tandem (…), now it is baked into the code from the ground up”.
This feature could augment uncertainty in radical ways. First, AI enables smarter engagement – “even to the point of non-engagement” – and encourages behaviour that breaks with what we currently understand as cyber operations (online exchanges between network defenders and offenders). ‘Poisoning the ruleset of the game’ is an example of the novel and unexpected modes of competitor engagement that AI makes possible. At the same time, we should also account for the extent to which domestic political and institutional conditions drive the course of national innovation: “Data coming from two different societies is inevitably going to produce different assumptions within the code”.
The result could be significantly divergent systems. Their deployment against adversaries in cyberspace could aggravate conflict by obscuring strategic dynamics and complicating “political engagements between various blocs in the international community”.
How could the cyber-AI nexus be governed? Responses from the policy world
AI/ML models can be understood as sets of systems ‘overlaid on top of the existing internet’. The point Christopher Whyte makes is that “you cannot talk about AI without talking about cyberspace. Cyberspace is this central avenue upon which AI technologies are built, they depend on it, they are resilient and vulnerable at the same time because of it”.
It follows that AI/ML functions are not independent of the technical parameters, normative structures or conflicts that steer behaviour in cyberspace. On the one hand, by virtue of its impact on the information and user layers of cyberspace – parts “closely synonymous with the internet” – “AI could contribute quite significantly to ‘web balkanisation'”.
On the other hand, international cooperation could produce positive externalities by providing opportunities to shape AI development from the start. For this reason, he argues that a technical basis for AI development, rather than political affiliation between like-minded states, should be the guiding principle for cooperation, provided that it is universal: “That way, even if divergent systems develop, we will at least have a foundation for understanding this divergence”.
Moreover, delineating the boundaries of AI development could create positive feedback loops for cyberspace governance. Whyte expects a “much more dynamic conversation about (…) the development of cyber norms, simply because AI is hard to ignore and is bound up in cyber”.
Tim Stevens zooms in on the substantive parameters that must be clarified to define a desired path for AI deployment: “We need to think very deeply about what can and what cannot be automated”. We must establish “what constitutes a decision and who is responsible for it. We cannot just respond to an aggrieved party by blaming accidental escalation on a machine”.
It is in this context that the EU plays a fundamental role, Stevens argues. The EU continues “to lead the world through the emerging relationships between cybersecurity and AI investment, regulation and public education. The EU has an opportunity to promote and export its vision of safe and secure AI”.
He concludes that the EU should pay closer attention “to the military and defence aspects of AI and cybersecurity”.
Related content:
• A scientist’s opinion: Interview with Tim Stevens about the AI-cyber nexus
• A scientist’s opinion: Interview with Christopher Whyte about the AI-cyber nexus