A scientist’s opinion: Interview with Dr Trisha Meyer & Prof Chris Marsden about Technology & Disinformation

Interview with Dr Trisha Meyer, Vrije Universiteit Brussel, & Prof Chris Marsden, University of Sussex, about Technology & Disinformation.

“In struggling to find a sustainable business model for the online environment, many traditional media providers have become almost exclusively reliant on advertising.”


You mention that human involvement in the application of AI systems is important to ensure the right of appeal. Has any research been done on the subject of AI versus hybrid moderation?

Trisha Meyer & Chris Marsden: There is a broad range of literature on the potentials and pitfalls of regulating policy problems through the use of technology generally. Relevant to your question, there are reports on the limited accuracy of current AI methods and the need to safeguard human rights. Furthermore, transparency is not a solution to the potential legal problems of AI.


You make many useful typological distinctions, such as differentiating between public, private, electoral, and foreign disinformation, or drawing out the nuances of co-regulation and self-regulation.

Do you think confusion over these terms is one of the reasons this problem is so complicated? Why is it important to make these distinctions? Do you think different agencies should deal with different categories of disinformation campaigns?

Trisha Meyer & Chris Marsden: Both the medium (public platform or private conversation) and the context (domestic or foreign) of dissemination matter, as different rules apply to each. For instance, the right to reply in case of inaccuracy is standard in codes of conduct for traditional media providers, and foreign interference is often prohibited in domestic election campaigns. Although disinformation can be tackled through generic measures such as media literacy, specific measures may be necessary depending on the type of disinformation at hand, such as ensuring that electoral laws remain valid in the online context.


You mention that a policy response should also be informed by a study of the history of mass media.

Do you think legacy media fully understand or acknowledge their role in the current ecosystem, which is so vulnerable to disinformation? How do you see that role?

Trisha Meyer & Chris Marsden: It is important to understand the changes that the traditional media sector is undergoing. In struggling to find a sustainable business model for the online environment, many traditional media providers have become almost exclusively reliant on advertising. Online advertising encourages the use of ‘clickbait’ techniques. Current initiatives targeted at the media sector seek to raise awareness of, and trust in, traditional journalistic standards, as well as to provide fact-checking services for news. It should be clearly noted that the term ‘fake news’ is used by those who wish to discredit critical voices.


Do you think regulation aiming to tackle disinformation has to revisit electoral law (especially when dealing with electoral disinformation)?

Trisha Meyer & Chris Marsden: Yes, in countries where electoral law has not been updated to fit the online context, this may be necessary. We share the widely expressed view that online media need to be regulated with techniques and accountability similar to those applied offline.


Your report draws on the assessment of the UN Special Rapporteur on freedom of expression that there is a human rights component in any decision to deploy AI in the battle against disinformation.

Do you think we should make it a crucial part of the ongoing debate? What are the dangers if we don’t?

Trisha Meyer & Chris Marsden: An assessment of which human rights will be impacted, and how, when tackling disinformation is absolutely necessary. Automated technologies are limited in their accuracy, especially for expression where cultural or contextual cues are necessary. Pushing this difficult judgment exercise onto AI and online intermediaries is dangerous, as we are allowing machines and private actors to decide what is (un)desirable and (il)legal speech.


If the solution is co-regulation or statutory regulation, who’s going to take the lead?

Trisha Meyer & Chris Marsden: Chris Marsden, the co-author of this study, has suggested the formation of a new ‘OffData’ regulator, merging the powers of the electoral commission, the data protection authority, the advertising regulator and the communications regulator. We suggest more research into the implementation of regulation in this area.


Is the European Union playing a leading role in tackling disinformation? In what way? Is there appetite for a Union-wide approach?

Trisha Meyer & Chris Marsden: The recently concluded EU Code of Practice on Disinformation for online platforms and advertisers did not result in additional commitments, oversight or convergence in current approaches. At the same time, the process has raised awareness that this multi-faceted problem requires a multi-faceted solution. All proposed policy solutions stress the importance of literacy and cybersecurity. Holistic approaches point to challenges within the changing media ecosystem and stress the need to address media pluralism as well. Further, in light of the European elections in May 2019, attention has focused on strategic communication and political advertising practices. The High Level Expert Group (HLEG) report and the European Commission’s elections package illustrate that the problem cannot be tackled in isolation, nor by regulatory actors alone.


How do social media’s business model and the current remedies to disinformation campaigns impact media pluralism?

Trisha Meyer & Chris Marsden: All forms of content moderation mentioned in the study (filtering of content, blocking of content, deprioritisation of content, and disabling or suspension of accounts) can potentially affect freedom of expression and media pluralism if there are no safeguards in place to protect against over-censoring. We advise against increased use of AI for content moderation without strong, independent, fully funded and externally audited human review and appeal processes.
