Interview with Henry Ajder, speaker and advisor helping organizations to navigate deepfakes, synthetic media, and the evolving AI ecosystem.
In 2019, you released a report which found that 96 % of all deepfakes produced depicted some kind of image abuse. What has changed since then?
Henry Ajder: Back then, no one really knew what the deepfake landscape looked like. Our report, The State of Deepfakes, was the first research to map it comprehensively. One of the most shocking findings was that 96 % of the deepfake videos we identified were explicit in nature. They almost exclusively targeted women, so we found it was a highly gendered problem, and that remains the case to this day. If we fast forward to 2021, the landscape looks both the same and different. It is different because deepfakes have become increasingly commoditised; that is, tools for creating novelty face swaps and other fun, meaningful content have really exploded. But the malicious-use landscape has not significantly changed in terms of who is primarily being victimised. It is still almost entirely women, and non-consensual image abuse still accounts for the vast majority of malicious uses of synthetic media, again in the 90-95 % region. The ease with which deepfakes can now be created has changed the scale of the problem: as everyday people get access to the technology, the primary targets have shifted from celebrities to private individuals.
Easily accessible apps and freely available code bases are proliferating on the web. Should we ban deepfake technology or have we missed the boat?
Henry Ajder: In a sense, malicious deepfake tools are like Pandora’s box: once they are out there, it is nearly impossible to stop their proliferation. If you take one website or tool down, another will spring up. The base code for these tools is also readily available in many repositories and libraries, and could easily be replicated from the ground up. These tools are also typically perversions of algorithms or models designed for other, perfectly benign or interesting purposes. For example, somebody creates an algorithm for generating AI art or fun videos, and a bad actor repurposes it for malicious use. Banning any technology that could possibly be used to create deepfakes is not the right answer. There are many creatively and commercially interesting uses for these technologies, and we don’t want to ban them entirely just because there is a chance they could be misused.
What can be done from a legal perspective to curb the misuse?
Henry Ajder: From a legal perspective, it needs to be made explicitly clear that this kind of activity, whether it’s creating this content in order to share it or sharing it, is akin to committing a form of sexual harassment or assault. The legal approaches we are seeing in the US, where certain states are criminalising deepfake intimate image abuse, are important. As opposed to civil lawsuits, where you would have to file a suit as a private individual, the state would prosecute offenders. That is a key step. We are also seeing similar approaches being discussed in the UK, South Korea and Japan. The legal side of this issue is a key part of the puzzle, but given that the internet is used by people around the world who often remain anonymous, it is by no means a silver bullet for addressing the problem.
So what can we do to stop the spread of malicious deepfake technologies?
Henry Ajder: The key point here is creating friction. We need to make sure app stores, vendors and platforms act to remove malicious apps and tools, especially those that are disguised as something else while still giving people the capability to make this kind of content. It’s essential to make these tools as difficult to access as possible. Internet service providers could also help by blocking known services that are accessed outside of centralised platforms. Pushing malicious tools underground, likely to places such as the dark web where we are already seeing malicious deepfake activity, would make them much more difficult for the vast majority of people to access. It may sound bleak that we can’t entirely stop people accessing these services, but making them as hard to find and use as possible, combined with strong legal action and resources for prosecuting offenders, is likely the best approach.
Speaking of the public, how could we stop the creation of non-consensual deepfake footage in the first place?
Henry Ajder: We could run public awareness campaigns like the ones created around revenge pornography and sexual harassment, making it clear that creating or sharing these images and videos without consent is a criminal act. But this needs to be done carefully, as raising public awareness can be a double-edged sword: by making people more aware of the technology for creating malicious deepfakes, we may unintentionally drive users toward it who weren’t previously aware of it.