The promise and limitations of technological solutions to disinformation

How useful can machine learning be in dealing with vectors of disinformation such as deep fakes or bots, and what are the implications of AI-powered fact-checking and deprioritising systems for media pluralism and freedom of expression?

Two new studies, launched by the European Science-Media Hub (ESMH) and the Panel for the Future of Science and Technology (STOA) at the European Parliament, unpack what has become an extremely complicated issue: disinformation.

Automated tackling of disinformation

The study ‘Automated tackling of disinformation’, by Alexandre Alaphilippe of the EU DisinfoLab and Kalina Bontcheva of the University of Sheffield, maps the AI-based and other technological initiatives launched across the globe to spot, debunk and counter dis/misinformation. Do these initiatives collaborate with each other?

Kalina Bontcheva, University of Sheffield: Collaborations are definitely starting to bear fruit. The First Draft initiative is a voluntary collaboration between companies, journalists and scientists, which has produced many useful resources and insights. Around elections, fact-checkers from different media outlets now often work collaboratively to deliver better results. And regulators have now acknowledged that there is a need for a coordinated approach.

The study also highlights the need for digital platforms such as Facebook and Twitter to make their datasets more accessible to researchers, so that the impact of dis/misinformation campaigns can be studied more effectively. It describes recent remedial initiatives that are steps in the right direction, such as Google’s Political Ads database, Twitter’s Ad Transparency Center and Facebook’s Ad Archive, but it calls for more transparency.
It also shines a light on different aspects of dis/misinformation, such as the financial motives of some of its producers and issues relating to users’ behaviour.

Figure: Institutions and media actors that should act to stop the spread of fake news (Eurobarometer 2018 survey)
On the whole, the study provides a useful mapping of fact-checking initiatives (a list is included in the study), deep fake and disinformation detection options, and bot-spotting systems. It also illustrates how accessible and easy the digital production of disinformation is.
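To make concrete what the simplest of these bot-spotting systems do, here is a minimal sketch in Python. The feature names, thresholds and weights are illustrative assumptions, not drawn from the study; the systems it maps typically rely on trained classifiers over much richer behavioural and network signals.

```python
# Minimal, illustrative bot-scoring heuristic. Features and thresholds
# are hypothetical; real systems use trained classifiers over richer
# behavioural and network signals.

def bot_score(account: dict) -> float:
    """Return a 0..1 score; higher means more bot-like."""
    score = 0.0
    # Very high posting rates are a classic bot signal.
    if account.get("posts_per_day", 0) > 100:
        score += 0.4
    # Default profiles (no avatar) add suspicion.
    if not account.get("has_profile_photo", True):
        score += 0.2
    # Accounts that follow many but are followed by few skew bot-like.
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    if following > 0 and followers / following < 0.01:
        score += 0.2
    # Very young accounts with heavy activity add suspicion.
    if account.get("account_age_days", 365) < 7:
        score += 0.2
    return min(score, 1.0)

suspect = {"posts_per_day": 250, "has_profile_photo": False,
           "followers": 3, "following": 4000, "account_age_days": 2}
print(bot_score(suspect))  # 1.0 -> flag for human review, not removal
```

The design point worth noting is the output: a score used to route accounts towards human review, rather than an automatic removal decision.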

European projects to fight against disinformation

CoInform
Comrades
Dante
Fandango
InVid
Pheme
Reveal
SoBigData
Provenance
Eunomia
WeVerify
SOMA
SocialTruth

See the annexes of the study for lists of the more than 60 European initiatives against disinformation.

The study anticipates that deep fakes, and synthetic media as a whole, will be the next threat to the information ecosystem, and warns that they can be even more challenging to tackle.

Kalina Bontcheva: Deep fakes are artificially generated images and videos. They can cause significant harm: they look credible, they are harder for citizens to verify, and they can give the impression, for example, that a politician has said or done something that they did not. Pornography-oriented deep fakes are offensive to the target and can be used as part of online abuse campaigns, for example against journalists. The difference from text-based disinformation or memes is the medium: deep fakes are artificial images and videos.
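Detection research typically treats this as a classification problem over individual video frames. The sketch below, using PyTorch and torchvision (a toolkit choice that is our assumption; the study surveys detection options without prescribing one), shows the common pattern of repurposing a standard image classifier to label frames as real or synthetic.

```python
# Illustrative frame-level deep-fake classifier skeleton (PyTorch).
# The backbone choice and two-class setup are assumptions made for
# this sketch, not the method of any specific system in the study.
import torch
import torch.nn as nn
from torchvision import models

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Start from a standard image backbone and replace the head
        # with a two-class output: real vs. synthetic.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, 224, 224) video frames
        return self.backbone(frames)

model = FrameClassifier()
dummy_frames = torch.randn(4, 3, 224, 224)       # stand-in for real data
probs = torch.softmax(model(dummy_frames), dim=1)
print(probs[:, 1])  # per-frame probability that the frame is synthetic
```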

Regulating disinformation with artificial intelligence

The study “Regulating Disinformation with Artificial Intelligence (AI)”, conducted by Dr Trisha Meyer of the Vrije Universiteit Brussel and Professor Chris Marsden of the University of Sussex, looks into the implications that using AI to contain the threat of disinformation has for freedom of expression, media pluralism and the functioning of democracy. It examines the application of automated content recognition (ACR) technologies, that is, textual and audio-visual programmes trained to spot bots or potential disinformation, and highlights the need to keep a human agent in the process so that any removal decision can be appealed. It also examines the trade-offs of other potential remedies to information manipulation, such as the blocking or deprioritisation of content.
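What ‘a human agent in the process’ can mean in practice is a routing rule: the classifier’s verdict is applied automatically only when it is confident that content is benign, while uncertain cases and all candidate removals are queued for human moderators. The sketch below is a hypothetical illustration of that pattern, not a description of the ACR systems the study reviews; the thresholds are assumptions.

```python
# Hypothetical human-in-the-loop routing for an ACR classifier.
# Thresholds and action labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str      # "allow", "human_review", or "remove_pending_review"
    confidence: float

def route(disinfo_probability: float) -> Decision:
    # Confident "not disinformation": allow automatically.
    if disinfo_probability < 0.2:
        return Decision("allow", 1 - disinfo_probability)
    # Confident "disinformation": still queue for human sign-off,
    # so every removal decision can be reviewed and appealed.
    if disinfo_probability > 0.9:
        return Decision("remove_pending_review", disinfo_probability)
    # Everything uncertain goes straight to human moderators.
    return Decision("human_review", disinfo_probability)

for p in (0.05, 0.55, 0.97):
    print(p, route(p))
```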

Crucially, the report recommends distinguishing between public, private, electoral and foreign disinformation, to enable a more effective regulatory approach. It also points to regulatory gaps, such as the one between time-limited electoral campaign regulations and advertising self-regulation, and delineates the different avenues of regulation in general (co-regulation, self-regulation, etc.), clarifying its position that, for the moment, legislation would be premature and potentially hazardous for freedom of expression.

How do social media’s business model and the current remedies to disinformation campaigns impact media pluralism?

Trisha Meyer and Chris Marsden: All forms of content moderation mentioned in the study (filtering of content, blocking of content, deprioritisation of content, and disabling and suspension of accounts) can potentially affect freedom of expression and media pluralism if there are no safeguards in place to protect against over-censoring. We advise against increased use of AI for content moderation purposes without strong, independent, fully funded and externally audited human review and appeal processes.
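Deprioritisation, one of the moderation forms Meyer and Marsden list, differs from blocking in that flagged content remains accessible but ranks lower, and the safeguards they call for imply that every such intervention should be recorded so it can be reviewed and appealed. The sketch below illustrates that idea; the penalty factor and audit-log shape are our assumptions, not the study’s.

```python
# Illustrative deprioritisation: down-weight a ranking score instead
# of removing the item, and log the intervention so it can be appealed.
def ranked_feed(items, flagged_ids, penalty=0.5, audit_log=None):
    """items: list of (item_id, base_score). Returns ids, best first."""
    scored = []
    for item_id, base_score in items:
        score = base_score
        if item_id in flagged_ids:
            score *= penalty            # deprioritise, do not delete
            if audit_log is not None:   # keep a record for appeals
                audit_log.append((item_id, base_score, score))
        scored.append((score, item_id))
    return [item_id for _, item_id in sorted(scored, reverse=True)]

log = []
feed = ranked_feed([("a", 0.9), ("b", 0.8), ("c", 0.3)],
                   flagged_ids={"a"}, audit_log=log)
print(feed)  # ['b', 'a', 'c'] -> 'a' is still visible, just lower
print(log)   # record of the intervention, available for appeal
```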

The recent EU initiatives against disinformation, as listed in the study, demonstrate that Europe is poised to play a crucial role in future regulatory approaches to the problem.

At the same time, the study stresses the need for holistic approaches that factor in an evolving media ecosystem and safeguard media pluralism. It brings forward the suggestion of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, for a “social media council”. It calls for thorough impact assessments of any tech-based solutions, to ensure the aforementioned rights are not impinged upon.

Trisha Meyer and Chris Marsden: An assessment of which human rights will be impacted when tackling disinformation, and how, is absolutely necessary. Automated technologies are limited in their accuracy, especially for expression where cultural or contextual cues are necessary. Pushing this difficult judgement exercise in disinformation onto AI and online intermediaries is dangerous, as we are allowing machines and private actors to decide what is (un)desirable and (il)legal speech.

Both studies aim to bring more clarity to the complex puzzle of disinformation, and both highlight the urgent need for a multi-stakeholder approach across sectors and initiatives.

Useful links
Article by Euronews, “How can Europe tackle fake news in the digital age?”, linked to the presentation of the studies to the STOA Panel

Related Content:
A scientist’s opinion: Interview with Kalina Bontcheva about Technology & Dis/misinformation
A scientist’s opinion: Interview with Dr Trisha Meyer & Prof Chris Marsden about Technology & Dis/misinformation
STOA study : Regulating disinformation with AI
STOA study : Automated tackling of disinformation