Joana Gonçalves de Sá is an Invited Associate Professor at the Physics Department of Instituto Superior Técnico, Universidade de Lisboa, and was the recipient of an ERC Starting Grant to study human behavior using the online spread of ’fake news’ as a model system.
Disinformation and misinformation are not a new problem, so why did the COVID-19 pandemic make it more evident?
Disinformation and misinformation have probably been around for as long as humans have been communicating. Recently, social media has made the problem far worse: information travels faster and is often unfiltered. By word of mouth you could only reach a limited number of people. Printed media could reach more, but there were only a few providers of information (or misinformation!). But now, with social media, everyone can be an amplifier. COVID-19 also made the misinformation that was already circulating about science, health and nutrition more visible.
Do you think that one can fight misinformation using the same social media channels that are used for its spread?
A 2018 study [Vosoughi et al. (2018) Science] showed that [political] misinformation spreads faster, further and deeper, and reaches many more people in a shorter period of time than ‘real’ news. So, it’s easier to spread false information than true information. The interesting question, from my perspective, is: why?
The other very interesting finding is that automated bot accounts spread false and real news at the same rate. These bots repost content, but they do not select between true and false. Humans do! The reason misinformation spreads more efficiently than true information is that some humans have a preference for it. Why do we prefer false news? We already have some hypotheses, and they are related to human biases.
Can you tell me more about the hypotheses you have established for your project: FARE, “Fake News and Real People – Using big data to understand human behaviour”?
Most of our hypotheses (indeed, all of them) come from behavioural and cognitive psychology. One comes from confirmation bias: you are more likely to share things that confirm what you already think. Another comes from the Dunning-Kruger effect, in which people with lower levels of knowledge tend to overestimate how much they know. What we have seen is that confidence grows faster than knowledge: people who know very little and people who know a lot don’t overestimate their knowledge, but the people in the middle do.
This is what we find amongst the anti-vaxxers: they’ve read and they’ve thought about it but they are far from being experts, although they rate themselves as such or higher. We think that the people who are more susceptible to false news are the ones who strongly overestimate how much they know about a subject.
Another hypothesis comes from group bias: people tend to believe more in people from their close group of friends than in experts.
What can you tell us about the results of your research into disinformation so far?
This project is not so much about false news itself; it’s really about using it as a model system, the way biologists use mice or flies. COVID-19 is particularly interesting because a year ago the general population had no opinion on masks or lockdowns. There were no priors, thus there was no confirmation bias. And then, suddenly, people polarized. Everyone knew what was going on and built a strong opinion about what should be done. It’s a really interesting model system: how do you polarize? How do you make up your mind, even though you have no idea what you’re talking about?
Do you think your project and big data will help us find a solution to fight disinformation and misinformation?
Yes, through ‘vaccination’ or ‘inoculation’, and there are some ideas on how we can do that. Facebook is already tagging what’s not real news, but only for a tiny percentage of misinformation. They put a filter over the news saying something like “this has been confirmed to be false” and downgrade the probability that you’re going to see it in your newsfeed. There are platforms where you are asked to answer some questions, or to prove that you have read the news article, before you share or comment, which is an interesting filter in general. You can also introduce a delay, like “this post will only be shared in eight hours.” You’ll never end misinformation, but there are things you can do to mitigate it. These are some ideas, but none of them is going to be a silver bullet.
We think about [false news] the same way we think about a disease caused by a pathogen. You can have an environment where the pathogen can spread, but if there are no susceptible individuals, everything is fine. We can work on the susceptibility of individuals, on the environment, or on the pathogen [false news]. The people to ‘inoculate’ first are either those who are very susceptible [to disinformation and misinformation] or those who live in an environment that makes them more likely to be exposed. This is exactly what we are doing to stop the spread of COVID-19.
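The pathogen analogy above can be made concrete with a standard SIR-style (susceptible–infected–recovered) simulation. The sketch below is purely illustrative and is not code or data from the FARE project: all parameter values and the notion of ‘pre-bunking’ a fraction of the population are hypothetical, chosen only to show why reducing susceptibility shrinks an outbreak of sharing.

```python
# Illustrative discrete-time SIR-style model of misinformation spread.
# 's' = fraction susceptible to a false story, 'i' = fraction actively
# sharing it, 'r' = fraction who have lost interest or been corrected.
# All parameters are hypothetical, for illustration only.

def simulate(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=200):
    """Return a list of (s, i, r) fractions over time."""
    s, i, r = s0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i   # a susceptible person meets a sharer
        new_recoveries = gamma * i      # sharers move on or are corrected
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

def peak_sharing(history):
    return max(i for _, i, _ in history)

# 'Inoculating' part of the population lowers the initial susceptible
# fraction (here, a hypothetical 40% are assumed immune from the start),
# which reduces the peak of active sharing.
baseline = simulate(s0=0.99)
inoculated = simulate(s0=0.60)
print(f"peak sharing, baseline:   {peak_sharing(baseline):.2f}")
print(f"peak sharing, inoculated: {peak_sharing(inoculated):.2f}")
```

Under these assumptions, the inoculated scenario always peaks lower than the baseline, mirroring the interviewee’s point that targeting the most susceptible individuals mitigates spread even though it never eliminates it.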