Henry Ajder on generative AI: ‘We need a balance between excitement and supervision’

Henry Ajder is a renowned expert on generative artificial intelligence (AI) and deepfakes. He advises companies and institutions including Adobe, Meta and the European Commission on the impact of AI on business and society. Ajder also hosted “The Future Will Be Synthesised”, a BBC radio series on deepfakes and synthetic media.


A German photographer has just refused the Sony World Photography Award he won because he generated his winning image using AI. It was a test to see whether competitions were ready for AI. His takeaway: “They are not”. Is he right?

Henry Ajder: Absolutely. This is a perfect microcosm of the disruptive impact of the generative AI paradigm shift. Creative competitions like this are simply unprepared to reliably detect, and deal with, AI-generated content.

While this is a pretty harmless example, there are many others that are much more serious, like cloning voices to commit fraud, or generating fake videos that may end up in courtrooms as digital evidence.


What makes this new generation of AI, like DALL-E and ChatGPT, so special?

Henry Ajder: The leaps in quality and variety of AI-generated media are staggering. Software can now accurately clone somebody's voice from just a few minutes of recordings of the original. Image generators like DALL-E 2 or Midjourney can likewise produce hyper-realistic, manipulated photos in seconds.

AI has also become extremely efficient and accessible. A decade ago, creating Hollywood-style visual effects took a room full of experts and expensive equipment. Nowadays, anyone with a gaming computer and the right tools can do the same. The generative AI revolution is as much about everyday people getting first-hand access to tools like ChatGPT as it is about the technological improvements.


How does an AI-based chatbot like ChatGPT work?

Henry Ajder: Its underlying engine is a large language model that has been trained on huge amounts of text scraped from the web. From that training data, the model learns which word is statistically most likely to come next. It learns the many relations between terms and phrases at a scale that previously wasn't possible. This is what allows it to generate the nuanced text you see in its output.

So the response you get from ChatGPT is essentially what it believes to be the most probable response to your prompt. You could call that a fancy AI-based version of autocomplete.
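To make the “fancy autocomplete” description concrete, here is a minimal Python sketch, purely an illustration for this article rather than how ChatGPT works internally: it tallies which word most often follows each word in a tiny corpus and predicts the most frequent continuation. Real large language models learn these relations with neural networks over vast vocabularies and long contexts, but the core idea of choosing a statistically probable continuation is the same.

```python
# Toy "autocomplete": count which word follows each word in a tiny
# corpus, then predict the most frequent continuation.
# Illustrative only: real LLMs use neural networks, not bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # tally every observed word pair

def most_probable_next(word: str) -> str:
    """Return the word that most frequently follows `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # -> "cat" (appears after "the" most often)
```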


What do text-generators like ChatGPT struggle with?

Henry Ajder: They make mistakes quite frequently. The problem is that they make these mistakes in an authoritative way, like a cocky undergraduate would. That has already led to big embarrassments, in particular for journalists who used ChatGPT to write articles that turned out to contain false statements.

Especially in their early days, common sense and numerical reasoning were not a strong suit of generative AIs. For example, I asked ChatGPT what weighs more, one kilo of feathers or one kilo of steel, and it wrote a long answer about why the steel weighs more – which is clearly nonsense.


And where do they excel?

Henry Ajder: I like to use them as creative partners to bounce ideas off. You can ask a model to make a given passage sound less clunky or to adapt it better to a particular audience. It can help you write articles, homework and essays, and it can even generate convincing poetry. But you have to critically review its output before using it. So it is best to see it as a co-pilot or collaborator.


Some studies show that ChatGPT can, for example, pass radiology board exams. Isn't that a bit more than just a co-pilot?

Henry Ajder: The question is not whether AI could be more than a co-pilot, but whether it should be! We're already seeing radiology departments piloting AI to analyse scans for known diseases, but always with human oversight somewhere in the loop. Could it replace a human radiologist? Yes, but the question is how well, and whether the general public and regulators would embrace it.


Which aspects of a doctor’s or radiologist’s work could AI not replace?

Henry Ajder: One area where AI really stands out is computer vision. This includes recognising and classifying elements, such as whether a scan shows a malignant or benign tumour. What AI currently cannot do is address the human element. A doctor might say, “I see this patient is really scared, so I need to make sure that I give her this result in a careful way”.

Through experience, good doctors learn how patients react and they know how to put findings in context. So we require both. In fact, in the future, I predict we’ll see the best diagnostic results and patient experiences when doctors and AI work together.
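To ground the classification point above, here is a minimal sketch using scikit-learn's bundled breast-cancer dataset. The assumptions are worth stating: the dataset contains tabular measurements rather than actual scans, and the model is a didactic example, not a clinical tool.

```python
# Minimal benign-vs-malignant classification sketch.
# Assumption: scikit-learn's bundled breast-cancer dataset (tabular
# measurements, not medical images); illustrative, not clinical.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # labels: 0 = malignant, 1 = benign
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Even a strong benchmark score says nothing about the human element Ajder highlights: delivering a result to a frightened patient remains the doctor's job.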


So doctors' and radiologists' jobs may not be in immediate danger. But the BBC wrote in March 2023 that AI could make 300 million jobs obsolete.

Henry Ajder: That depends on the jobs. I would be careful with these massive claims. This new generation of AI certainly threatens more white-collar jobs than ever before, particularly artists, text-based knowledge workers and even software developers. But can it do these tasks fully unsupervised? Not reliably, at the moment. These AI models still make mistakes, and some sectors, like finance and insurance, do not have a high risk tolerance. Some jobs will inevitably be cut, while others will in fact be augmented or created by AI.


Some of the media's sensational coverage of AI is based on controversial studies, such as the Oxford study from 2013. Its authors estimated that within two decades, 47 percent of US employment could be at risk of automation. Ten years later, nothing has happened. Are these “crystal-ball forecasts”?

Henry Ajder: This is indeed a bit of a problem. AI technologies have always been covered in a sensationalist way, with hype-driven headlines being the norm. The studies’ predictions may ultimately be wrong, but there is value in the AI-related questions they raise: we need to address these questions now. For example, how will EU member state governments deal with rising unemployment numbers in certain sectors?


Legislation and politics notoriously lag behind when it comes to regulating technology. What makes you think the EU has the expertise required to make solid decisions in this complex field?

Henry Ajder: Yes, we have already seen that lag with Uber, electric scooters and just about every other disruptive technology. It's true that many politicians aren't very AI-literate, but the EU does have key politicians who are up to speed, and the forthcoming EU AI Act is a major step towards catching up. It will cover elements of AI from training data and data privacy to AI bias and the watermarking of AI-generated content. The act will also classify some forms of AI as “high risk”, which will require further safety auditing and commitments from developers.


That sounds like a trade-off: regulation could ensure high-quality AI, but it might deter tech companies from building software in Europe.

Henry Ajder: This indeed needs to be well balanced. Regulating AI is essential, as the field is currently highly experimental and unregulated, a bit like the Wild West. But if the hurdles become too high or unreasonable for technology founders, they will move to the US, the UK or other countries where they have more investment opportunities and fewer restrictions on building their software. We don't want to demonise all AI, which has many amazing uses, but rather to encourage responsible development.


What specifically needs to be regulated?

Henry Ajder: For one, we need to address the right to know, depending on the context. If you go to a movie theatre, you are usually aware that parts of the movie are likely synthetic. But this technology is now bleeding into everyday life, so regulation needs to spell out exactly where we have a right to know that AI was involved in a decision or generated the content we are interacting with.

In creative arts, that may not be necessary. But in other contexts we should know whether AI was involved in the decision-making or creation of content.


But most AI-based products are not transparent. They are black boxes built by Silicon Valley companies. How can I trust a piece of software whose inner workings nobody but its maker understands?

Henry Ajder: It's problematic if, for example, AI-based software produces the right result 90 percent of the time but you cannot say why. It may be doing so for all the wrong reasons, which only becomes clear later, as it ingests more data or generates more results. This could be due to training data that contain implicit biases, or to human interventions during training. The EU AI Act goes some way towards addressing this challenge, but there is still a long way to go before we can understand how all AI reaches its decisions.


Until then, how do we use it best?

Henry Ajder: If we use these systems at scale, the buck still has to stop with a human. A human ultimately has to take responsibility for every decision AI makes.

What we need right now is a balance between being excited about the possibilities generative AI offers, and the realisation that this new generation of AI is by no means perfect and needs careful supervision.