Despite alarmist news stories about deepfakes heralding the end of democracy or truth itself, the technology – for better or worse – is far from perfect, leaving a window of opportunity to prepare society, institutions and regulatory frameworks before it matures.
Whether through inaccuracy or intentional manipulation, the circulation of false information has become one of the leading problems in the digital environment. Now watchdogs are fighting back with a range of solutions.
As with many recent national elections, the European Parliament elections face the challenges of online disinformation. The European Commission is making an extra effort to push giant tech companies to implement a voluntary code of conduct against disinformation, and European researchers are preparing the first studies on the topic.
How useful can machine learning be in dealing with vectors of disinformation such as deepfakes or bots, and what are the implications of AI-powered fact-checking and deprioritisation systems for media pluralism and freedom of expression?
Scott Brennen from the Oxford Martin Programme on Misinformation, Science, and Media investigates how changing media structures and technologies are shaping scientific information and misinformation.