Social media platforms have taken a leading role in our everyday lives and have changed the way we obtain health information online. The most recent topic fueling disinformation is the novel coronavirus. However, it is not the only one.
Social media platforms are used by one-third of the world's population, and they are changing how we find partners, access news and engage with politics. But the rise in disinformation and data privacy breaches is denting public trust. Is it too late to harness the potential of social media for good?
Despite alarmist news stories about deepfakes heralding the end of democracy, or of truth itself, the technology, for better or worse, is far from perfect. This suggests there is still a window of opportunity to prepare society, institutions and regulatory frameworks for the moment it matures.
Whether through inaccuracy or intentional manipulation, the circulation of false information has become one of the leading problems of the digital environment. Now watchdogs are fighting back with a range of solutions.
As has been the case in many recent national elections, the European Parliament elections are also facing the challenge of online disinformation. The European Commission is making an extra effort to push giant tech companies to implement a voluntary code of conduct against disinformation, and European researchers are preparing the first studies on the topic.
How useful can machine learning be in dealing with vectors of disinformation such as deepfakes or bots, and what are the implications of AI-powered fact-checking and deprioritising systems for media pluralism and freedom of expression?
Scott Brennen from the Oxford Martin Programme on Misinformation, Science, and Media investigates how changing media structures and technologies are shaping scientific information and scientific misinformation.