A scientist’s opinion: Interview with Christopher Whyte about the AI-cyber nexus

Interview with Christopher Whyte, PhD, Assistant Professor in the Homeland Security & Emergency Preparedness program at Virginia Commonwealth University’s L. Douglas Wilder School of Government & Public Affairs. Dr. Whyte’s research focuses on issues of cyber conflict strategy, with a particular interest in information operations and the emerging impact of artificial intelligence. He is the author of numerous scholarly articles on digital insecurity and of several published and forthcoming books in the area.


Despite the hype surrounding artificial intelligence (AI), there is far less discussion of the nexus between AI and the cyber domain. Why is it important?

Christopher Whyte: You cannot talk about AI without discussing cyberspace. Cyberspace is the central avenue upon which AI technologies are built: they depend on it, and they are at once resilient and vulnerable because of it. It is the avenue through which AI is enabled and, as such, it is going to be targeted and attacked, even beyond the contours of what we currently call cyber conflict or cyber operations.

Similarly, many of the issues we have in debates over cyberspace are the same in AI. We can think of AI as a general-purpose set of systems overlaid on top of the existing internet. We are not talking about a fundamentally new architecture for the commons: AI enables transformations of existing configurations and features of society more than it promises a new architecture for the digital world. I am optimistic that we are going to have a much more dynamic conversation about cyberspace and the development of cyber norms, simply because AI is hard to ignore and is bound up in cyber.


Do artificial intelligence and machine learning (AI/ML) technologies entail a change of paradigm for cybersecurity practices or only incremental upgrades in cyber capabilities?

Christopher Whyte: Anybody asserting at this point that the paradigm of cybersecurity practice is transforming entirely is doing so prematurely. But yes, the paradigm of cybersecurity is definitely going to evolve as AI techniques become more accessible and cheaper to implement, as the expertise needed to build sophisticated models and apply them to cyber instruments becomes more widespread, and as AI becomes commonplace in societal and economic functions.

AI is going to bring dramatic problems of information overload, complexity and ambiguity. Cyber engagements are going to get smarter and faster at the same time. This will add to the issues of perception and misperception that are already plaguing everything cyber-related. The cadence with which AI is going to be felt in high-level political interactions in the international community is going to dramatically increase over the next decade.


Which AI features matter most for cybersecurity?

Christopher Whyte: Speed is probably the biggest game changer. There are a number of things AI enables that are the equivalent of moving from hot air balloons to jet planes. Malware with AI baked in is going to perform much faster analysis of attack surfaces. Right now, there is a piece of malware called Silver Sparrow – it is very sophisticated and has a self-destruct functionality. In the future, such AI-enabled malware will be able to do things that will make cybersecurity forensics much, much harder to carry out. Malware will be able to distribute itself and attack very quickly via whichever vectors serve the purpose of the program, while eluding forensic investigation for as long as possible. Speed also applies to the sophisticated automated discovery of novel vulnerabilities.

The half-life of usable information – the value of signals received and perceived in cyberspace – is going to decrease dramatically as large-scale data handling becomes cheaper and far more sophisticated. This is incredibly relevant for how governments approach security operations in cyberspace. Another concern is that stolen information, including complex social profiles and industrial information, is going to be exploitable far more quickly by criminals and countries alike, including countries with distinct ties to organised crime.


What would these developments mean for the efforts of cyber defenders?

Christopher Whyte: Because this conversation may sound alarmist, it is worth saying that the upgrade of the defensive paradigm is going to happen in tandem; this is an arms race of sorts. There is going to be an upgrading of the defensive side of the internet, which, as a lot of cybersecurity professionals might argue, has perpetually been a failing paradigm. The internet protocols were first developed in the 1960s to allow information transmission between computers, and security was not really a concern at that point. Then industries grew around networked computer technologies, and by the 1980s going back and resetting the clock was no longer possible. This failing paradigm is going to benefit from AI: patching, simulation and dissimulation tactics, and active intelligence gathering aimed at minimising risk are all going to improve with AI.

The trick is to recognise that we are talking about an increasingly adversarial learning environment. This is how deepfakes work. Generative adversarial networks pit two algorithms against each other within the same program. The first one, the generator, creates a deepfake; the second one, the discriminator, says ‘I can detect that this is a deepfake, and here is how I did it’; then the first one says ‘Here is one that will get around how you did that last time’, and so on. While offensive and defensive capabilities have always evolved in tandem, spurred by a need to balance against each other, now they are baked into the code from the ground up.
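
To make that loop concrete, here is a minimal sketch of the adversarial dynamic in PyTorch: a generator learns to produce fake samples while a discriminator learns to flag them, each update responding to the other’s last move. The toy data (a one-dimensional Gaussian standing in for ‘real’ media), the network sizes and the training settings are illustrative assumptions, not details from the interview.

```python
# Minimal GAN sketch: generator vs. discriminator, trained in alternation.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-dimensional noise to a fake 1-D "sample".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is to be real (0..1).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from N(4, 1), standing in for genuine media.
    real = torch.randn(64, 1) + 4.0
    fake = G(torch.randn(64, 8))

    # 1) Train the discriminator to tell real from fake.
    opt_D.zero_grad()
    loss_D = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_D.backward()
    opt_D.step()

    # 2) Train the generator to fool the updated discriminator --
    #    "here is one that will get around how you did that last time".
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(64, 1))
    loss_G.backward()
    opt_G.step()

# If training went well, generated samples have drifted toward the real mean (~4).
print("generated mean:", G(torch.randn(1000, 8)).mean().item())
```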


How is this fundamentally different? Why does it matter for engagement in cyberspace?

Christopher Whyte: When we are talking about fundamental changes in the syntactic layer – the code – we always have to consider the human element. Humans are the reason that information technologies are insecure, because we inevitably design them imperfectly. ‘Cyber’ as a prefix goes back to the cybernetics research of the post-war period, and everything ‘cyber’ thereafter implies the interaction of human and non-human systems. And so, cybersecurity as a concept conflates elements of human and societal security, institutional security, and the underlying information security precepts – the mathematics. With AI, we are talking in the same terms; separating the human from the technical is not easy to do. When training AI models, we are baking our biases and assumptions into the data. If AI models are then used to train further AI models, we could be perpetuating human biases in the system.
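
A small, entirely synthetic sketch of that last point: a bias baked into the first model’s training labels survives into a second model trained only on the first model’s outputs, with no human relabelling anything. The ‘skill’ and ‘group’ features and the bias rule below are invented purely for illustration.

```python
# Toy illustration of bias propagating when models train models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

def make_features(n):
    skill = rng.normal(size=n)          # the legitimate signal
    group = rng.integers(0, 2, size=n)  # an attribute that should be irrelevant
    return np.column_stack([skill, group])

X = make_features(n)
# Biased historical labels: outcome mostly tracks skill, but group 1
# is systematically penalised.
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)) > 0

model_a = LogisticRegression().fit(X, y)

# Second-generation model: trained only on model A's predictions.
X_new = make_features(n)
model_b = LogisticRegression().fit(X_new, model_a.predict(X_new))

for name, m in [("model A", model_a), ("model B", model_b)]:
    X_test = make_features(n)
    rate0 = m.predict(X_test[X_test[:, 1] == 0]).mean()
    rate1 = m.predict(X_test[X_test[:, 1] == 1]).mean()
    print(f"{name}: positive rate group 0 = {rate0:.2f}, group 1 = {rate1:.2f}")
# Both generations favour group 0: the bias persists in the system.
```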

With AI as an element of interference operations or conflict, it is worth bearing in mind that data coming from two different societies is inevitably going to produce different assumptions within the code.

For instance, China’s digital apparatus comes from domestic experience more than from reaction to advances made by the Americans and the Russians in cyberspace. It goes as far back as the late 1990s, to Falun Gong practitioners organising protests in chatrooms, and in doing so operating entirely beyond the party’s capacity at the time to monitor social dissent. Such incidents were realisation episodes. What came after – the upgrading of China’s digital censorship apparatus – had a dynamic focus on applying AI to everything from the implementation of the social credit system to facial recognition: features of the state apparatus that allow it to control society internally.

This domestic experience affects how they are going to engage with competitors via cyberspace and how they are going to model the types of targets they might want to attack. There are many complex issues in understanding the subsequent strategic dynamics and how they will affect conflict and other political engagements between various blocs in the international community.


What does this technology mean for current cyber defence postures and strategy?

Christopher Whyte: One concern is how AI will improve mission-specific persistence – the ability of actors, and increasingly of less sophisticated actors, to focus on high-value targets via or in cyberspace. The result is that mission-specific defence will have to become more robust. Active defence is therefore going to be far more attractive as a method for understanding the battlespace – which essentially means offensive operations. How do we actually understand the battlespace as defenders? We have to go out and operate beyond our own networks. The persistent engagement strategy bound up in the USA’s current cybersecurity force posture illustrates this well.

Persistent engagement can be seen as a sort of strategy for minimising entropy: if cyber engagements are basically signals that give both sides information about what the other side is up to, persistent engagement is about reducing the uncertainty in those signals.
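
Read literally, that framing can be put in information-theoretic terms. Below is a back-of-the-envelope sketch: a defender’s beliefs about an adversary form a probability distribution, an engagement is a noisy signal that updates those beliefs via Bayes’ rule, and the drop in Shannon entropy measures the uncertainty removed. The hypotheses, prior and likelihoods are invented for illustration; nothing here comes from the interview.

```python
# Entropy reduction from one observed "engagement", as a toy Bayesian update.
import math

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Prior belief over three hypothetical adversary postures.
hypotheses = ["espionage", "pre-positioning", "criminal activity"]
prior = [1 / 3, 1 / 3, 1 / 3]

# Assumed likelihood of observing this engagement under each hypothesis.
likelihood = [0.7, 0.2, 0.1]

# Bayes' rule: posterior is proportional to prior * likelihood.
unnorm = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]

print(f"entropy before engagement: {entropy(prior):.2f} bits")
print(f"entropy after engagement:  {entropy(posterior):.2f} bits")
# On this reading, persistent engagement is a policy of repeatedly buying
# such entropy reductions about what the other side is up to.
```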

But what we are describing here is a natural upgrading of the dynamism of the cyber conflict landscape, as AI enables smarter engagement, even to the point of non-engagement when it is not deemed beneficial. There is a perpetual incentive to avoid the types of engagement that persistent engagement is hoping to force: a condition where competition is essentially agreed upon between countries. AI creates incentives to never engage in those ways. Instead, it encourages behaviour aimed either at engaging competitors in novel ways that amount to lateral – alternative or unexpected – engagement, or even at poisoning the ruleset of the game being played.


Is AI relevant for the debate on internet fragmentation?

Christopher Whyte: Yes, I do think that AI could shape the future ‘splinternet’ development, which seems almost inevitable at this point. The current conflict – if it can be called that – between great powers over internet governance is something akin to the scramble for Africa. Governments take action to secure their own material and technology supply chains, to secure their regional infrastructure, or to set rules of operation that might prohibit foreign companies from operating domestically. It is likely that AI regulation will increasingly become part of this politicking. The way policy regimes often form, with like-minded blocs adopting similar approaches, will likely contribute to building a world where what AI looks like diverges by geography.

The likely story of AI influencing cyberspace is going to be one of affecting its information and user layers, as opposed to the physical or logical layers – those bits of ‘cyberspace’ most closely synonymous with the internet. And so, AI could contribute quite significantly to web balkanisation, or to the degradation of the network’s social value vis-à-vis alternatives. It does not seem all that unlikely that new uncertainties and instabilities linked to AI might be one of the major prompts for divergent new web technology standards – blockchain alternatives, for example – that might produce a truly divided set of global internets.


Can these new strategic dynamics be dealt with through international cooperation?

Christopher Whyte: Can international cooperation produce positive externalities in the development of AI? Yes, absolutely; with AI, there are opportunities that were overlooked or simply ignored in the case of the internet. The political case that UN Charter values should be the ones guiding AI safety and ethics, and ultimately regulation, has garnered support, whereas such needs were not even recognised back when they might have been relevant for the internet.

We would like to avoid what seems to have happened in cybersecurity norm regimes; institutions and norms being developed and adopted by countries as a simple matter of political alignment. I would strongly argue that a technical basis for AI development should be the primary guiding principle for international cooperation on AI, provided that such cooperation is as universal as possible within the international community. That way, even if divergent systems develop, we will at least have a foundation for understanding such divergence.
