A scientist’s opinion: Interview with Henry Ajder about Deepfakes

Deepfakes, a scientist’s opinion

Interview with Henry Ajder, Head of Communications and Research Analysis at Deeptrace.

Can you tell us a few things about yourself, your career and your position at Deeptrace?

Henry Ajder: My academic background is in philosophy, which I studied at undergraduate level at the University of Warwick, and I hold an MPhil from the University of Cambridge. I specialised in the philosophy of perception and I have always been interested in the way we perceive and process information about the world. My career started with research into the ethical issues surrounding emerging technologies at Nesta, in particular artificial intelligence (AI), and has led me to my current position as Head of Communications and Research Analysis at Deeptrace. In this role I head up our stakeholder engagement work and lead our research mapping and analysis of the landscape of deepfakes and other forms of AI‑generated synthetic media.

Your recent report on the state of play on deepfakes mentioned that the majority (96 %) of all deepfake videos you found are pornographic. Can you tell us what categories of material you found (face swaps, I presume)? And did the victims of the revenge porn examples you found involve everyday citizens or individuals in the public eye?

Henry Ajder: The vast majority of the deepfakes we found online were face swaps (as the tools for creating these are most accessible), although we also encountered some examples of synthetic voice audio and lip synchronisation. Again, the vast majority of deepfake pornography victims were high-profile celebrities (predominantly actresses and musicians), although we did come across several examples of private individuals being targeted. Some of the deepfake pornography forums and websites have pages dedicated to this form of deepfake pornography, which was a very disturbing finding.

Are you aware of steps to propose legislation to tackle deepfakes around the world? If so, what are your thoughts?

Henry Ajder: Several countries are looking into regulating deepfakes, with the US being the most active so far. Most action involves laws that prohibit the creation of certain kinds of deepfakes, or their creation within certain time windows (e.g. before an election), but others have focused on providing avenues for victims to seek justice when their image is used inappropriately. The key point I would make is that, in some cases, new laws may be needed, but much of the harm deepfakes cause is covered by existing laws, or could be covered by amending existing laws. Some of the discussion surrounding deepfakes has been quite sensational, and I think a more nuanced discussion is needed to inform the legal debate.

The Deeptrace report also mentioned online deepfake services. Are we seeing the birth of an entire industry creating fraudulent content? Where are these actors mostly located?

Henry Ajder: It is almost impossible to tell exactly where the actors behind these services are located, but we observed discussions held in English, Chinese and Korean on forums and in some services. The development and number of these services certainly indicates that a market is emerging, although many of the tools for creating deepfakes are open source and therefore free. However, these still require skill to operate effectively – opening up space for the service portals and marketplace services we observed.

You mentioned Euler Hermes in a case involving a financial scam using synthetic audio. Are you aware of any other attempts to obtain money?

Henry Ajder: It is important to state, as we do in the report, that no concrete evidence has been provided to show that the Euler Hermes case definitely involved synthetic voice audio, but the capabilities certainly exist (albeit in a far less accessible form than tools for creating face swaps). Another similar case where cybercriminals used synthetic voice audio to scam companies was reported by Symantec, but again no concrete evidence was provided. However, as the tools become more accessible and commodified, I think these kinds of attacks will become more prevalent.

What crimes or online harms can deepfakes generate?

Henry Ajder: Many! As I’ve already mentioned, non-consensual deepfake pornography is the most established form of deepfake, and it almost exclusively targets women. Synthetic impersonation could become a significant means of defrauding individuals and businesses, and could also extend to video calls, where live facial re-enactment could be used to create realistic avatars. Enhanced bullying, corporate reputation damage and compromised legal evidence are all threats posed by deepfakes, which can manipulate the way we understand, react to and view events or people.

Has your research shown that we are missing a thorough typology of synthetic media?

Henry Ajder: Defining deepfakes is inherently difficult. Can Siri, Snapchat filters, computational photography on your iPhone, or FaceApp’s ‘de-aging’ feature be considered deepfakes? I typically refer to deepfakes as the malicious use of AI‑generated synthetic media, but this is by no means a universally adopted definition, and the term is used differently by many people. In terms of a gap, I think satire or parody (if done responsibly and with proper labelling) doesn’t fall under my previous definition of deepfake, but, again, existing forms of satire using synthetic media have almost exclusively been referred to as deepfakes. In this respect, your question points to how we might distinguish between positive/benign uses of synthetic media/deepfakes and malicious/negative uses – which is difficult!

What is Deeptrace’s remit, ambition and future plans?

Henry Ajder: We describe what we are building at Deeptrace as an anti-virus for deepfakes, based on tools for monitoring and detecting deepfakes and other forms of manipulated media. Our technology works by detecting ‘digital fingerprints’ left behind on synthetically manipulated or generated photos and videos. This is achieved by training a deep‑learning network on large sets of deepfake data, where it learns to identify these fingerprints at pixel level. For this element, we do not use metadata or contextual analysis – purely digital forensics. As deepfakes continue to improve and the visual signs of manipulation begin to vanish, this analysis will become increasingly important for detecting deepfakes. As seen in our report, another ongoing research process we undertake at Deeptrace is mapping the new tools and technologies used by producers and attackers, and examining where communities of malign actors are emerging and what their motives are. Just as with any dynamic between attackers and defenders, we need to vigilantly monitor how deepfakes and their uses are evolving, adapting our technologies accordingly. Our aim is to develop these tools to provide the most effective approach to protecting organisations, businesses and individuals from the malicious use of deepfakes.
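To make the idea of pixel-level ‘digital fingerprints’ concrete, here is a deliberately simplified sketch. It is not Deeptrace’s system (which trains a deep network on large labelled datasets); it is a toy forensic heuristic, built on the assumption that manipulated imagery can carry unusual high-frequency residue, using a hand-written Laplacian high-pass filter and a fixed threshold in place of a learned decision boundary.

```python
import numpy as np

# Toy illustration only: a real detector would be a trained deep network,
# and the 0.5 threshold here is an arbitrary assumption, not a learned value.

# 3x3 Laplacian kernel: responds to high-frequency detail, zero on smooth regions.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def high_freq_energy(img: np.ndarray) -> float:
    """Mean absolute Laplacian response over a greyscale image (values in [0, 1])."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * LAPLACIAN)
    return float(np.mean(np.abs(out)))

def looks_manipulated(img: np.ndarray, threshold: float = 0.5) -> bool:
    # A production system would learn this boundary from data rather than fix it.
    return high_freq_energy(img) > threshold
```

A smooth gradient image produces essentially zero response, while a noise-filled image scores far above the threshold; the (hypothetical) `looks_manipulated` helper just wraps that comparison. The gap between a hand-tuned rule like this and a network that learns its own pixel-level features is exactly why trained detectors are needed as generation quality improves.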
