A scientist’s opinion: Interview with Denis Teyssou about Deepfakes



Interview with Denis Teyssou, head of AFP Medialab R&D.


Can you tell us a few things about your career to date, your role at InVID and how you ended up working with deepfakes?

I’ve been managing AFP Medialab for the last 12 years. I’m a journalist with a background in informatics and innovation management. I am currently the innovation manager at InVID and WeVerify, and I created the InVID verification plug-in. I also manage the dissemination and use work package (WP) at InVID and the user evaluation at WeVerify. The main video verification technique we use is to trigger a reverse image search on keyframes, which is what we have partially automated in InVID, and this technique applies to deepfake detection. We have showcased this on the Commission’s Futurium page and I have also written on the topic.
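The keyframe technique mentioned above can be sketched in a few lines. The following is a minimal illustration (not InVID’s actual code) of the underlying idea: select frames whose content changes sharply from the previous frame, so that each "shot" yields a representative image that can then be submitted to a reverse image search engine. The threshold value and the mean-pixel-difference score are illustrative assumptions; real keyframe extractors use more robust shot-boundary detection.

```python
# Minimal sketch of keyframe selection for reverse-image-search verification.
# Assumption: a crude mean absolute pixel difference works as a shot-change
# score; the threshold of 30.0 is arbitrary and chosen for this toy example.
import numpy as np

def select_keyframes(frames, threshold=30.0):
    """Return indices of frames that differ sharply from their predecessor."""
    keyframes = [0]  # always keep the first frame as a keyframe
    for i in range(1, len(frames)):
        # mean absolute pixel difference between consecutive frames
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        if diff > threshold:
            keyframes.append(i)
    return keyframes

# Synthetic example: five flat grey frames forming three "shots"
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 12, 200, 198, 60)]
print(select_keyframes(frames))  # -> [0, 2, 4]
```

Each selected keyframe would then be uploaded to one or more reverse image search engines to look for earlier copies of the same footage, which often exposes recycled or doctored video.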


Can you tell us about InVID technologies (your suite of tools)?

InVID was a Horizon 2020 (H2020)-funded project composed of a consortium of nine partners. It was led by the Information Technologies Institute/Centre for Research and Technology Hellas in Thessaloniki, Greece. InVID has developed several tools for video verification, including a verification plug‑in and a web app.


How many and which media companies have reached out to you to use your products?

We have thousands of users for both the verification plug-in and the verification web app. The plug-in now has more than 15 000 users (almost 6 000 active users per month) and is being used by the New York Times, BBC, France24, Deutsche Welle, ARD, ZDF, RTVE, France Info, fact-checking operations around the world, Amnesty International, the Office of the UN High Commissioner for Human Rights and many others.


How elaborate, accessible or expensive are computer apps for creating deepfakes?

There are several open source apps available and most can run on an average computer with a good graphics card. The technology is evolving rapidly, with less data, less time and less computing power needed to achieve a realistic result. The Chinese app Zao is a good example.


What are the challenges of debunking deepfake and authenticating original material?

There are several challenges. Synthetic audio means that it will be possible to make any person say anything, making it even easier to make politicians or celebrities victims of deepfakes, as there is a lot of publicly accessible audio of their voices. As the technology is evolving rapidly, any detector, particularly one based on machine learning, will have to be updated frequently in order to detect new threats. It will be a non-stop game of cat and mouse. Some scientists talk of an unwinnable race.


Does InVID receive funding from private companies? And how easy is it for technology initiatives to secure public funding rather than go to the private sector?

InVID was an H2020 project that ended in December 2018. InVID and WeVerify are innovation actions funded by the European Commission. The verification plug-in wraps several open source tools and was released as a container under an MIT licence – it’s an open source intelligence tool. It uses the data of the main reverse image search engines. It was launched as a free prototype and as a public good to help fight disinformation. Some of our services could be blacklisted but, politically, it would probably not be a good idea as the platforms have committed to the EU code of conduct on disinformation.


Have you been approached or have you yourself approached political actors/parties to use your tools? If so, in what circumstances?

All the main operators that wanted to fact-check election campaigns and expose information disorders – such as CrossCheck in France for the 2017 presidential election – have been using our plug-in since July 2017. At each election, we see a rise in users in the relevant countries.


Apart from your own verification tools, what steps would you like to see taken to combat deepfakes?

I think we should ‘keep calm and carry on’ regarding deepfakes. There is too much hype and confusion on this topic. Even the doctored video of Nancy Pelosi, which was initially referred to as a deepfake, was just a slowed-down version of the original and was processed with traditional editing software. A similarity search index of visual content on social media platforms (Facebook, Instagram, Twitter, etc.) would help in finding the original content behind deepfake videos.
