Online disinformation in elections, a scientist’s opinion
Interview with Kajsa Falasca, Senior Lecturer and Assistant Professor at the Department of Media and Communication Science (MKV) at the research centre DEMICOM at Mid Sweden University.
Prior to last year’s elections in Sweden, huge numbers of fake news items were posted on Twitter, concerning Islam and immigration among other issues. In your experience, how effective was this campaign?
There was indeed a lot of fake news and misinformation posted on social media platforms such as Facebook and Twitter, by a number of different stakeholders. That doesn’t necessarily mean that this misinformation translated into support and votes in the actual elections. We have found that many people oppose these messages disseminated on social media. Many social media interactions are measured only in terms of numbers, and the content of the comments or discussions is not taken into account. It is difficult to evaluate the effect of a particular message. What we do know is that the political parties considered the use of social media as a forum for political communication in the 2018 national elections in Sweden to be of great value, as it was cost-effective and reached a lot of people. We did notice that a very small right-wing party, Alternative for Sweden, was connected to many of the anti-immigration and anti-Muslim messages. The party was very active on social media, but did not receive enough votes to get into parliament.
The majority of social media users are young people. Are they indeed the target of these campaigns?
Yes, young people do use social media a lot, but you also have to consider the large number of different social media platforms, which have different categories of users, especially during an election campaign. For example, politicians and journalists mostly use Twitter as a means of communication. Young people use social media more than the older generations, but the age groups on certain platforms are changing. For example, Facebook is now more commonly used by older generations, and political parties and politicians can reach younger voters more easily on Instagram.
The European Parliament elections are approaching fast. Will you be monitoring these elections? Are there any lessons to be learned from last year’s elections in Sweden?
My research centre, DEMICOM, produced a report about the Swedish national elections, involving about 100 researchers and several different approaches. We are now running a similar project for the EP elections, involving 60 researchers from all Member States. The results will be available quite quickly, 10 days after the elections. The project will focus on European politics, the campaigns and media coverage, and the use of social media and other digital media. It will also include an analysis of fake news, disinformation and misinformation. My own contribution will focus on how political parties use social media. I am looking into how they behave on social media, and how they perceive the effects of social media both before and after the elections. In Sweden, all the parties taking part in the elections are very concerned about being transparent during the campaign and are committed to not using misinformation.
Last September, Facebook, Google and Twitter signed up to a voluntary code of conduct. The European Commission criticised them about a month ago for not living up to their commitments in the fight against disinformation. What is your view on this?
This is a crucial question for future democratic public discussion and debate. The giant tech companies often discuss their responsibility for the conversations and content displayed on their platforms. Over the last ten years, we have seen Facebook move from saying ‘we are not responsible at all’ to saying today that they are concerned and are trying to take measures to regulate and control content. I think this is very positive, because as we have seen, a lot of people are moving onto Facebook and using it as a way of receiving information and having discussions and debates. At the same time, it is also easy to criticise these giant companies for not doing enough. I don’t have any statistics on the amount of resources these companies dedicate to controlling or regulating the content on their sites, but they don’t seem to have the ability to control it. This is a major problem, and the level of responsibility of these companies really needs to be discussed.