As the world grapples with a coronavirus pandemic that caught most governments unprepared, Covid-19-related mis- and disinformation threatens to exacerbate a complicated challenge of global proportions.
In South Africa, disinformation has already posed health risks by urging people to avoid being tested, while a viral video suggesting that protective masks and gloves were actually making people sick was viewed millions of times before tech companies took it down.
The over-abundance of information surrounding the pandemic – labelled an “infodemic” by the World Health Organization (WHO) – includes mis- and disinformation adopting a variety of themes. Even though the most obviously harmful to the public is disinformation relating to immunity, treatment and prevention, a range of other approaches have sprung up to complement it. Second-order effects have followed: conspiracy theories framing 5G technology as the cause of Covid-19 have driven believers to attack telecom masts and engineers, depriving vulnerable groups of connectivity that is vital during lockdown.
Disinformation about the origin of the coronavirus – namely conspiracy theories suggesting the outbreak did not begin at an animal market in Wuhan, China – has grown so prominent that it has become an issue of public debate, with the US and Australia asking for an independent investigation. False statistics on mortality and reported cases, opportunistic disinformation about the pandemic’s impact on the environment, and one-sided framing of facts to provide a specific, politically driven view of the crisis have also been observed as strategies of Covid-19 related disinformation.
Dr Julie Posetti, Global Director of Research at the International Center for Journalists and UN consultant on disinformation: “One of the risks we face is there will be an attempt to treat deadly health disinformation differently to political disinformation, despite the destabilisation of democracies and the loss of life that are also linked to false content of a political nature that goes viral.”
Different countries may be facing a different mix of themes. After researching disinformation trends in Italy, France and Spain, for instance, the EU DisinfoLab identified the virus’s origins, supposed cures, and its co-option to advance other secret agendas as the most prevalent.
Increasingly worrying has also been the co-option of the coronavirus by the far right to promote its own xenophobic narrative, by targeting either the Chinese diaspora or other minorities. Anti-Muslim and racist narratives have also been embedded in coronavirus-related disinformation, with the problem reaching such vast proportions that United Nations chief António Guterres stated that the pandemic had unleashed “a tsunami of hate and xenophobia, scapegoating and scare-mongering”. Conspiracy theories vilifying the ‘elites’, as projected onto the personas of Bill Gates and George Soros, have also been recorded in research conducted by the US-based firm Graphika.
“Covid-19 has created an unfortunate convergence of many conspiratorial groups: from anti-vaccination to the QAnon community and other anti-science groups like climate denial advocates. These groups frame and misrepresent the issue to fit their ideological goals,” says Erin McAweeney, senior analyst at Graphika. “There’s been a lot of effort from bad actors to undermine expertise, which is evident in widespread calls to fire Fauci and conspiracies around the WHO.”
Adding to the toxic mix are political spin and sometimes ambiguous or confusing media coverage that leaves the public craving answers. Interviewees in a Cardiff University study in the UK, for example, seemed to be good at spotting disinformation about the pandemic; instead, they criticised confusing journalism and contradictory information from the government.
According to McAweeney, the rebranding of accounts as coronavirus “news sources” is one of the many strategies bad actors are using to manipulate the conversation.
Is the coronavirus pandemic the “tipping point”?
The stand-off between tech companies and governments over disinformation and how to bring it under control had been gaining momentum, with the EU setting a clear course towards regulation and even avid proponents of the First Amendment coming to terms with the fact that freedom of speech does not mean a free-for-all. Then the coronavirus outbreak hit the news, with nations across the world reporting increasing numbers of infections and, consequently, fatalities.
In contrast to their previous reluctance to regulate political, election or anti-vaccination disinformation, the tech companies’ reaction this time was swift. In March, a group of companies including Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube announced they would work together to address Covid-19 disinformation.
A common thread across most companies’ measures to tackle Covid-19 disinformation is redirecting users who search for coronavirus-related material to sources deemed authoritative. Pinterest, the first tech company to take decisive action on anti-vaccination disinformation last year, started providing a “custom search experience” for people searching for coronavirus, directing them to information from the WHO.
Google and its subsidiary YouTube also direct queries to authoritative sources, although an investigation by the Tech Transparency Project revealed there were still videos peddling disinformation that were falling through the cracks. The CEO of YouTube has since announced the company would be taking down coronavirus-related videos that contradicted official WHO guidance.
Twitter introduced a new policy to remove content that contradicts public health advice and, similarly to Google, it also directs user searches for Covid-19 to authoritative sources.
Stephen Turner, Head of Public Policy EU/Belgium for Twitter: “World leaders have outsized influence and sometimes say things that could be considered controversial, but a critical function of our service is providing a place where people can openly and publicly respond to their leaders and hold them accountable.”
Facebook, called “an epicentre of coronavirus disinformation” by an Avaaz report, has also implemented changes such as pinning public health warnings to the top of users’ news feeds, launching a Coronavirus Information Hub, surfacing educational pop-ups and labelling non-life-threatening disinformation, such as conspiracy theories, as false through its cooperation with third-party, IFCN-accredited fact-checkers. Instagram, owned by Facebook, is also directing queries to authoritative resources, while its other subsidiary, WhatsApp, has limited the forwarding of messages that have already been shared more than five times, allowing them to be forwarded to only a single chat at a time from that point onwards.
TikTok, the Chinese video-sharing platform that has become popular in the west, is also directing users searching for coronavirus towards WHO guidance.
Nevertheless, the effectiveness of the tech companies’ policy changes has been challenged by a series of investigations. An Avaaz report using the CrowdTangle platform revealed that over 40% of coronavirus-related disinformation remained on Facebook even after it was debunked by fact-checkers. Avaaz researchers discovered it took Facebook an average of seven days to take down debunked stories, a delay that meant millions of people could have been exposed to them before their removal. Following the report, Facebook acted on one of its recommendations, announcing it would notify users who had been exposed to health-threatening disinformation.
Luca Nicotra, Senior Campaigner at Avaaz: “It is time social media platforms take full responsibility for their recommendation algorithms. Content creators should have freedom of speech, not freedom of reach.”
Another investigation, conducted by The Markup, revealed that Facebook was operating a “pseudoscience” ad-targeting category covering an audience of 78 million users. Again, the tech company took action to discontinue the category after the report.
A crucial issue that concerns all platforms is disinformation stemming from political figures. According to Politico, for example, Facebook is still not ready to curtail disinformation disseminated by political leaders even when it jeopardises public health.
Oversight and trade-offs
If this ‘infodemic’ proves one point, it is that the problem of disinformation is nowhere near resolved. At the beginning of 2020, the global information ecosystem had pre-existing vulnerabilities onto which Covid-19, as a health, political and economic issue, was allowed to attach and fester.
Unfortunately, the shortcomings of the measures taken are revealed through the work of civil society and researchers rather than some form of official oversight body. Beyond their inherent limitations, measures such as the widespread automated content removal, employed in the absence of human moderators who needed to self-isolate, may have impinged on users’ rights such as freedom of expression. Article 19 of the International Covenant on Civil and Political Rights protects freedom of expression but allows restrictions for the protection of public health. Even so, such measures need to be necessary, proportionate and non-discriminatory.
The European Commission is also planning to take action, according to the Vice-President and Commissioner for Values and Transparency, Věra Jourová:
Věra Jourová: “The Covid-19 pandemic is a stark reminder that disinformation can harm the physical and mental health of EU citizens, impact public safety and undermine our core values and destabilise societies. We, the EU and its governments, have to get better in detecting and exposing it. Platforms must step up their efforts and open up so their actions can be assessed. Together with HRVP Borrell, I am working on the new initiative for the beginning of June to tackle the covid-related disinformation.”
In these new efforts, it would be useful to consider an issue highlighted by the UNESCO policy report: as it stands, oversight of the tech companies’ measures is lacking, so an independent evaluation of their effectiveness is impossible. Whether because of data privacy concerns or the protection of their algorithmic systems and business models, tech companies have been very protective of the data that researchers have been calling for in order to tackle disinformation by first establishing cause and effect. Recently, as part of their ongoing efforts, a group of signatories including Access Now, the Committee to Protect Journalists, and the EU DisinfoLab asked tech companies to at least store data on the disinformation posts they remove, to enable researchers to fulfil their mission of protecting the information environment.