The Provenance research consortium has seven partners. What role do the different partners play?
Jane Suiter: Countering disinformation requires cooperation from different disciplines and areas of expertise. To address this complex challenge, the Provenance research consortium is composed of seven partners, each playing a key role in the project’s development and implementation.
The Dublin City University Institute for Future Media, Democracy and Society coordinates Provenance, ensuring that the understanding of disinformation and digital media within the project reflects best practice and is focused on the right areas.
As the project is specifically concerned with empowering and upskilling ordinary citizens, Fundación Cibervoluntarios provides expertise on digital empowerment and digital inclusion. This is particularly important as those who lack digital skills may be more vulnerable to disinformation. Graz University of Technology takes the lead in evaluating the Provenance solution to make sure it is effective in helping citizens evaluate content.
On the technology side, the ADAPT Centre in Trinity College Dublin are developing the text analytics that will evaluate or verify text-based content. They also lead the personalisation component, which translates the analytics into an accessible summary for end users.
The Czech Academy of Sciences are developing tools for visual analytics, with a specific focus on the problem of identifying deepfakes. Beyond analysing content itself, Newswhip provides analysis of how information flows online. The Irish SME has developed award-winning technologies for rapid content discovery and virality prediction on social media. Finally, Everis is a large enterprise specialising in blockchain solutions; in Provenance, records of verification actions are stored in a secure blockchain ledger.
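The tamper-evidence property of such a ledger can be illustrated with a minimal sketch: each verification record is hashed together with the previous entry's hash, so altering any stored record invalidates every later entry. This is an illustrative assumption about how a verification ledger might work, not the project's actual implementation; all names and fields below are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a verification record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class VerificationLedger:
    """Toy append-only log of verification actions (hypothetical sketch)."""

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        h = record_hash(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute the chain; an altered record breaks every later hash."""
        prev = "0" * 64
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = VerificationLedger()
ledger.append({"content_id": "img-001", "check": "reverse-image", "result": "unaltered"})
ledger.append({"content_id": "txt-042", "check": "source", "result": "verified"})
print(ledger.verify())  # True; editing any stored record makes this False
```

A distributed blockchain adds consensus and replication on top of this chaining idea, but the integrity guarantee rests on the same hash-linking shown here.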
Two solutions for digital content verification have been conceived to help two different actors counter disinformation: the plug-in, a personalised digital companion for the general public, and the repository for content creators. What is the primary focus of the project?
Eileen Culloty: The primary focus of the project is helping people evaluate online content. We investigated the content creators’ solution, but there are too many complications arising from copyright and related issues. Addressing those is not feasible for our project. However, when we talk about addressing disinformation, we are also talking about helping people recognise high-quality or reliable content. Ultimately, in the long term, we ask: What if interventions like this can change the dynamics of social sharing by encouraging engagement with high-quality content?
The dynamics of social media prioritise instant reactions and that is one reason why hoaxes and rumours spread so quickly. Our idea is to insert some friction into that response – a moment to pause and think about the credibility of content. Research on disinformation is new and there is a lot we do not know, but evidence from current research supports the Provenance approach. For example, people seem to be bad at spotting disinformation in ordinary circumstances, but when they are asked to stop and think, they are actually quite good at it. This is the basic idea of Provenance.
How will the plug-in work in practice and how can users benefit from it?
Eileen Culloty: The browser plug-in provides context for individual pieces of content: Where does it come from? Is it written in an emotionally manipulative way? Have the images been doctored?
First, the system gives a warning – if there is one. End users can then click on the Provenance icon to access a summary of what we know about that piece of content. This summary is visual and colour-coded to make it quick and easy to evaluate.
This summary information is divided into categories: general information, source information, and information about language and visuals. Finally, the end user can click on the icons to find out why we attributed the warning. This explanation is designed to support learning about media literacy.
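The grouping described above can be sketched in a few lines: per-item analytic checks are mapped into the colour-coded summary categories. All check names and the category layout here are illustrative assumptions, not the project's actual data model.

```python
# Hypothetical sketch of the plug-in's summary logic.
# Each check flag is True when a potential problem was detected.
CATEGORIES = {
    "general": ["unusual_spread"],
    "source": ["low_domain_reputation", "unknown_author"],
    "language_and_visuals": ["emotive_language", "image_doctored"],
}

def summarise(checks: dict) -> dict:
    """Group raw check flags into category -> {colour, warnings}."""
    summary = {}
    for category, keys in CATEGORIES.items():
        warnings = [k for k in keys if checks.get(k)]
        summary[category] = {
            "colour": "red" if warnings else "green",
            "warnings": warnings,
        }
    return summary

checks = {
    "unusual_spread": False,
    "low_domain_reputation": False,
    "unknown_author": False,
    "emotive_language": True,
    "image_doctored": False,
}
print(summarise(checks)["language_and_visuals"])
# {'colour': 'red', 'warnings': ['emotive_language']}
```

In this sketch the colour gives the at-a-glance verdict while the warning list supplies the drill-down explanation, mirroring the two-level summary the interview describes.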
Why is the final ‘network building’ objective important in the context of the project?
Eileen Culloty: We began the project with five key target groups in mind: civil society, content creators, policymakers, researchers, and potential investors or collaborators. Building a network like that is crucial because no one project is going to solve the disinformation problem. It requires ongoing cooperation among different stakeholders and a good understanding of the needs and perspectives of those stakeholders. As Provenance is a citizen-first project, we have developed a close relationship with media literacy practitioners, as these practitioners are on the frontline of upskilling citizens and closing digital divides. Policymakers also play a crucial role in setting the agenda for addressing disinformation. For example, we have given presentations on Provenance to the Organisation for Economic Co-operation and Development’s High Level Risk Forum, the UN’s International Organization for Migration, and the European Regulators Group for Audiovisual Media Services, among others. Many of these events are cross-sectoral by nature – they bring together researchers, fact-checkers, policymakers, and even digital platforms. Building a diverse network is also really important to understand what the steps are beyond the end of Provenance and what collaborations might take shape.
Are you working in collaboration with other similar projects (see: EU-funded projects to tackle disinformation)?
Eileen Culloty: We have collaborated with the coordination and support action Social Observatory for Disinformation and Social Media Analysis (SOMA) by contributing to their investigations on the Truly Media verification platform. For example, recent reports examined false claims about COVID-19 in different countries. We also invited SOMA to present the platform to our JOLT network of journalism practitioners and researchers in 2019. They tested the platform and gave feedback on what practitioners need. We also hope to work with the EUNOMIA project to test our tools in their decentralised environment, and we have submitted a new project bid with researchers from WeVerify.
The Commission published the Digital Services Act (DSA) proposal on 15 December 2020 and in the coming months, Parliament and the Member States will discuss the proposal in the ordinary legislative procedure. What do you think about the current draft? Could it be effective in changing the current landscape (in particular, concerning access to data)?
Jane Suiter: When it comes to disinformation, the DSA is best viewed alongside the Media Action Plan and the Democracy Action Plan. The move towards co-regulation within the overall vision of the EU’s digital agenda is a positive development. It is also really positive to see the emphasis on supporting news media and media freedom, because developing tools to counter disinformation will mean little unless there is an equal effort to revitalise the media system and democracy.
Eileen Culloty: Access to data remains a key issue because there is still so much that we do not know about disinformation. Currently, independent researchers and policymakers are unable to determine the true scale and impact of online disinformation. That is a real problem if we want to ensure that our countermeasures are focused on the right areas. While the DSA does not directly address that issue, the newly created European Digital Media Observatory (EDMO) sounds really promising. One of EDMO’s objectives is to access platforms’ data for research purposes. That will certainly play a major role in future research.