A scientist’s opinion: Interview with Stephen Turner about the infodemic


Interview with Stephen Turner, head of Public Policy EU/Belgium for Twitter. Stephen manages the company’s relations with EU regulatory bodies, policymakers, NGOs and civil society organisations across all Twitter issue areas – including freedom of expression, transparency, disinformation, illegal content, consumer protection, and privacy and data protection.


Twitter’s measures to tackle disinformation were monitored by the European Commission under the Code of Practice. Were there any lessons learned from that experience that you can extrapolate to the fight against coronavirus-related disinformation?

Stephen Turner: Twitter has a zero-tolerance approach to the artificial amplification of public health misinformation, as well as any attempt to abuse or manipulate our service. We continue to invest in detection tools and technology to combat malicious automation and manipulation of our service. We adopted the following measures to further counter misinformation on our platform:

  • Increasing our use of machine learning and automation to take a wide range of actions on potentially abusive and manipulative content.
  • Building systems that enable our team to continue to enforce our rules remotely around the world.
  • Instituting a global content severity triage system, so we are prioritizing the potential rule violations that present the biggest risk of harm and reducing the burden on people to report them.
  • Executing daily quality assurance checks on our content enforcement processes.
  • Reviewing the Twitter Rules in the context of Covid-19 and considering ways in which they may need to evolve to address new forms of account behaviour.

The value of the Code of Practice is that it builds a foundation for cooperation and trust building between institutions and industry. This makes it an adaptable tool to address and react quickly to emerging issues and challenges.


A policy brief recently published by UNESCO on Covid-19-related disinformation highlighted the lack of oversight mechanisms to evaluate the effectiveness of the measures taken by tech companies. Indeed, even during the European Commission’s monthly reporting on disinformation, most figures provided referred to bulk takedowns or accounts challenged, without detailed analysis of which malign accounts posted what and, more crucially, how many users had seen the content before the takedown. Is Twitter in talks with regulators about a more meaningful oversight mechanism that can evaluate whether the measures announced are fit for purpose?

Since 2006, Twitter’s API has given researchers and developers the opportunity to tap into what is happening in the world. Twitter firmly believes in open data access to study, analyse, and contribute to the public conversation. Our service is the largest source of real-time social media data, and we make this available to the public for free through our API. No other major service does this. All of our API data is public – no private user data is included. This covers Tweets, bios, who you follow, Tweets you have liked, and so on – but no email addresses or IP data.

Recently, Twitter launched a Covid-19 stream endpoint to enable researchers to study the public conversation in real-time. The dataset covers many tens of millions of Tweets daily and offers insight into the evolving global public conversation. Transparency is a core value of the work we do at Twitter and providing access to data and information is key to ensuring a better understanding of how conversations and global movements take place on the platform.
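To make the "open data access" described above concrete, here is a minimal sketch of how a researcher might construct an authenticated query against Twitter's public API. The Covid-19 stream endpoint mentioned in the interview required approved developer access, so this illustration uses the generally available v2 recent-search endpoint instead; the exact query string and parameter values are illustrative assumptions, not details from the interview.

```python
import urllib.parse

def build_covid_search_request(bearer_token: str, max_results: int = 10):
    """Build the URL and headers for a hypothetical query against
    Twitter's v2 recent-search endpoint (illustrative only)."""
    base = "https://api.twitter.com/2/tweets/search/recent"
    params = {
        # Example query: English-language Covid Tweets, excluding retweets.
        "query": "covid-19 -is:retweet lang:en",
        "max_results": max_results,
    }
    url = base + "?" + urllib.parse.urlencode(params)
    # API access uses an app-level bearer token; no private user data
    # (emails, IP addresses) is ever returned by these endpoints.
    headers = {"Authorization": f"Bearer {bearer_token}"}
    return url, headers

url, headers = build_covid_search_request("YOUR_BEARER_TOKEN", max_results=25)
```

The returned `url` and `headers` would then be passed to any HTTP client; actual calls require a registered developer account and token.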

Since October 2018, Twitter has maintained a public archive of state-backed information operations – the largest of its kind in the industry. The archive is continuously updated and has been accessed by thousands of researchers from around the world, who in turn have conducted independent, third-party investigations of their own.


What is Twitter doing in relation to Covid-19 disinformation posted by political figures or political organisations? Your policy states: “If a Tweet from a world leader does violate the Twitter Rules but there is a clear public interest value to keeping the Tweet on the service, we may place it behind a notice that provides context about the violation and allows people to click through should they wish to see the content”. Who within Twitter is the judge of what constitutes the public interest and how is this negotiated? How would Twitter respond to a post by a public figure endangering public health?

We assess reported Tweets from leaders against the Twitter Rules. In cases where world leaders violate the Covid-19 guidelines, we may additionally apply the public-interest notice. By the nature of their positions, leaders have outsized influence and sometimes say things that could be considered controversial or invite debate. A critical function of our service is providing a place where people can openly and publicly respond to their leaders and hold them accountable. With this in mind, there are certain cases where it may be in the public’s interest to have access to certain Tweets, even if they would otherwise be in violation of our rules. On the rare occasions when this happens, we’ll place a notice – a screen you have to click or tap through before you see the Tweet – to provide additional context and clarity. We’ll also take steps to make sure the Tweet is not algorithmically elevated on our service, to reduce the potential harm caused by these Tweets.

A cross-functional team within our company including Trust and Safety, Legal, Public Policy and regional teams will determine if the Tweets are a matter of public interest based on the following considerations:

  • The immediacy and severity of potential harm from the rule violation, with an emphasis on ensuring physical safety;
  • Whether preserving a Tweet will allow others to hold the government official, candidate for public office, or appointee accountable for their statements;
  • Whether there are other sources of information about this statement available for the public to stay informed;
  • If removal would inadvertently hide context or prevent people from understanding an issue of public concern; and
  • If the Tweet provides a unique context or perspective not available elsewhere that is necessary to a broader discussion.


Twitter’s policy also states: “Going forward and specific to COVID-19, unverified claims that have the potential to incite people to action, could lead to the destruction or damage of critical infrastructure, or cause widespread panic/social unrest may be considered a violation of our policies.” Can you be more specific in terms of what “actions” you are trying to avert?

This refers to specific and unverified claims that incite people to action and cause widespread panic, social unrest or large-scale disorder, such as “The National Guard just announced that no more shipments of food will be arriving for 2 months – run to the grocery store ASAP and buy everything!” or “5G causes coronavirus — go destroy the cell towers in your neighborhood!”. Hence, by “actions” we mean violent offline actions such as attacking a specific minority, attacking hospitals with Covid-19 patients, calling on people to willingly infect others, and so on.


A report by the Reuters Institute for the Study of Journalism (https://bit.ly/35AZ4tk) found that, in the sample they investigated, 59% of posts that had already been debunked remained online even after being rated false by fact-checkers. Can you explain the lag time?

In response to Covid-19, we are prioritising the removal of content when it contains a call to action that could potentially cause harm. As we have said previously, we will not take enforcement action on every Tweet that contains incomplete or disputed information about Covid-19. Rather than relying on user reports, we enforce this in close coordination with trusted partners, including public health authorities and governments, and we continue to use and consult information from those sources when reviewing content.

Since introducing these new policies on 18 March, we’ve removed more than 2,400 Tweets and challenged 3.4 million potentially spammy accounts targeting Covid-19 discussions. We have also prioritised surfacing authoritative information. For example, our localised event pages and search prompt feature help ensure that when you come to Twitter for information about Covid-19, you are met with credible, authoritative content at the top of search. We have been consistently monitoring the conversation on the service to make sure keywords — including common misspellings — also generate the search prompt. In each country where we have launched the initiative, we have partnered with the national public health agency or the World Health Organization directly. The proactive search prompt is in place with official local partnerships in more than 80 countries around the world.

Following on from our announcement earlier this year, where we introduced a new label for Tweets containing synthetic and manipulated media, this week we announced that similar labels will now appear on Tweets containing potentially harmful, misleading information related to Covid-19. We have kept an updated blog about all of Twitter’s updates, policies, and products around Covid-19.
