Technology & Disinformation : a scientist’s opinion
Interview with Kalina Bontcheva, University of Sheffield.
Can you explain the implications of deepfakes and synthetic media disinformation more broadly? In what ways are they different from more text-based output?
Deepfakes are artificially generated images and videos. They can cause significant harm: they look credible, they are harder for citizens to verify, and they can give the impression that, for example, a politician has said or done something that they did not. Pornographic deepfakes are also offensive to the person targeted and can be used as part of online abuse campaigns, e.g. against journalists. The difference from text-based misinformation or memes lies in the medium – deepfakes are artificial images and videos.
You also mentioned the need for platforms like Twitter and Facebook to make their datasets more available, as well as to store historical data.
Do you think that means we need more movement in terms of data protection law across the board in general? Can you speak a little about why data is important?
Yes, there is a strong need for data protection laws that can adequately address the challenge of user data being harvested and misused without consent by third-party applications, political parties and politicians, researchers, and others. The questioning of the UK Information Commissioner by the UK DCMS inquiry into fake news raised many interesting points with respect to privacy and data protection. Regarding the datasets specifically: making them accessible to researchers does not mean they need to be made public, and many of the social platforms are already sharing selected data with selected labs. The argument here is that access needs to be more open – to all researchers, but also to journalists. The High-Level Expert Group report made the more general point about having European research institutes that enable this to happen. Having historical data is critically important, including misinformation and accounts that have been removed by the platforms, as it enables researchers and journalists to establish what misinformation there was, how it spread, who spread it, and potentially why. It is for the same reason that we keep old newspapers and other material of historical importance in libraries. We are not advocating making all data available, only data around key events, e.g. elections, or on key controversial topics, e.g. vaccines or abortion.
Can you explain what information laundering means?
It basically means taking information from one site, then reposting it on another, possibly with some modification, but without crediting the original source. It’s used to give the impression that multiple sites are reporting on the same (false) story and thus give it credibility.
Can you talk a bit about the monetary incentives behind fake news, and how and by whom they can be addressed?
Generally, fake news sites generate revenue from hosting ads. De-monetising them is most likely to succeed through changing the algorithms of Google and social platforms to not promote such sites and clickbait adverts, as well as through better advertising standards.
There is a breadth of different initiatives in your study. Do you see collaborations bearing fruit already?
Yes, collaborations are definitely starting to bear fruit. The First Draft initiative is a voluntary collaboration between companies, journalists, and scientists, which has produced many useful resources and insights. Around elections, fact-checkers from different media outlets now often work collaboratively to deliver better results. And regulators have now acknowledged that there is a need for a coordinated approach.