What prompted you to conduct a study on the effects of COVID-19-related false news?
There has been a lot of research that addresses the terrible consequences of false news and how it is harming our health and political systems. We were wondering: what are the actual effects?
So we started looking into that question and found that the existing literature is mostly citing somebody else’s opinion, like a chain of citations. Most of the studies assess people’s belief in false stories, which is important, as is understanding why and how false news spreads. But there is this implicit assumption that if somebody shares false information, they will also act on it.
So the number of shares is not directly indicative of the effects?
People share content on social media for all kinds of reasons, including satire, flagging it for other people and critical commentary. There is evidence suggesting that a lot of the time when people share misinformation, it is not because they think it is true but because they want to point out the mistakes or make fun of it.
A recent example of health misinformation is Nicki Minaj’s tweet claiming that her cousin’s friend’s testicles swelled up after receiving a vaccine. How many times did that get retweeted and shared? Lots of times. But mostly because people were pointing fingers at the obvious misinformation and laughing about it.
What matters is also the context in which you come across a piece of misinformation.
There is definitely a social element to the uptake of misinformation. Think about the recent news on people taking the horse dewormer ivermectin to cure COVID-19. That is almost a uniquely American phenomenon, and it is very specific to a right-wing and conservative political context.
These people did not just casually stumble across a story on horse dewormer while having their morning coffee and then decide to try it. They were given that story in a particular political and social context.
As part of our study, we created novel false news stories as an experiment, in order to minimise this contextual influence.
Furthermore, there is evidence that just having seen a story several times increases your level of belief in it. If you hear a story over and over again, you will eventually assign a higher truth score to it. What we wanted to see with our experiment is, if you have a single exposure to a false news story about something that you should do health-wise, what will that do to your behaviour?
What kind of stories did you design?
In May 2020, when contact-tracing apps were not yet available, we built a story around the Irish contact-tracing app. We said that the developers had links to Cambridge Analytica and that there were privacy concerns, which of course was not true. We made it up, but it was very plausible. A lot of people believed that it was true. And a lot of people said they had heard it before, which means they had formed false memories.
Exposure to that story reduced their intention to download the app when it became available. Those effects were enhanced if people falsely remembered having seen this story before. I should mention that we obviously debriefed people after the experiment.
How big was the effect?
Not every story had effects on the participants’ behaviour. A story that claimed eating chilli peppers or drinking juices would reduce COVID-19 symptoms had no effect at all. After being exposed to that story, people did not show stronger intentions towards eating spicy foods. Across the stories, we did find effects, especially with the more plausible stories, but the effects were small and inconsistent.
But even small effects can cause huge ripples. Think of vaccination, where a small difference in vaccine uptake determines whether or not we have herd immunity. In the UK, in the mid-2000s, there was a 10% drop in vaccination rates that can clearly be linked to a poorly conducted study by Andrew Wakefield suggesting that the measles, mumps, and rubella vaccine may predispose children to behavioural regression and developmental disorder. The drop was, in turn, associated with a reduction in herd immunity and a spike in measles cases. So there are clear behavioural consequences. And the effect also depends on who gets exposed to false news.
Are some people more prone to forming false memories than others?
We found that differences in analytical reasoning are in part responsible for forming false memories after exposure to false news.
This has nothing to do with intelligence. It is rather a cognitive style: do you tend to reason slowly, analytically and systematically? Or do you rather make quick and intuitive decisions? Everybody can do both, but we all have a tendency to either swing one way or the other. We found that people who are more likely to reason analytically are less susceptible to false news and forming false memories; whereas if you are someone who reasons intuitively, if a story feels true and is in line with your ideology, you are more likely to form a false memory out of it.
Analytical thinkers can stop this automatic thought process and ask for evidence, even if they would like a story to be true.
How do you test that ability?
We give people a series of word problems to solve. Here is one. Emily’s father has three daughters: April, May and – what’s the name of the third daughter? The intuitive answer that springs automatically to mind would be June, but the correct answer is Emily.
Let’s try another one. How much dirt is in a hole that is three feet long, two feet wide and three feet deep? You will get a lot of people trying to multiply the values and calculate the volume. But there is no dirt. It is just a hole.
Even if you get the answers right, you will notice that it takes you longer to suppress your gut response and reason analytically. These tests are good indicators of people's reasoning styles.
What could stop us from drawing such hasty conclusions?
There is some evidence that we can nudge people into a more critical mindset. When you come across a news story, simply being asked 'how accurate do you think this is?' kicks you into a mindset of thinking about accuracy, and you become less likely to fall for false news subsequently. What you actually answer is irrelevant.
Introducing a small amount of friction that still leaves you a choice also helps. For example, Twitter now asks you if you want to read the article behind a tweet before you share it. The choice is still yours, but that automatic behaviour is interrupted.
Then there are warnings issued by the media, fact-checkers and social media platforms. How well do they work?
We found that generic warnings do not have any effect at all. In Ireland, there was a big campaign during the COVID-19 pandemic that asked people to be media-smart. It came in various formats, even as radio advertisements. The campaign was well intentioned but just not effective.
On the other hand, what does work is pre-bunking specific stories. The idea of pre-bunking is that you warn someone in advance that the story they are about to read is not true. They will then read it with that warning in mind and scan the story more critically.
That sounds good, but can we afford to pre-bunk every story?
You cannot get a little warning for every single story saying it might be false before you read it. Organisations can fact-check only a certain number of stories, and usually those are the ones that gain the most traction. It is really hard to do, and we cannot do that forever.
When the social media companies choose to act, they do so selectively. They flagged false news in the US elections but not in many other elections around the world. That is why we are trying to shift the audience’s cognitive styles, moving towards the idea of changing people’s automatic way of engaging with the news, so that they do not have to be consciously aware every time that a story might be false.
Greene, C. M., & Murphy, G. (2021). Quantifying the effects of fake news on behavior: Evidence from a study of COVID-19 misinformation. Journal of Experimental Psychology: Applied.