Interview with Anja Bechmann, Professor of Media Studies and Director at DATALAB – Center for Digital Social Research.
Regarding SOMA, your research will look at psychological principles relating to disinformation. Can you elaborate on that?
Anja Bechmann: The ‘fake news’ discussion revolves around the underlying assumption that if we prevent ‘fake news’ from occurring, our democracy will be secured. However, research shows that people tend to share news articles only when the article supports their own existing political conviction. So if sharing is driven by conviction rather than accuracy, we as a research team need to focus on entirely different psychological principles, which requires developing new methods to ensure a fruitful communication forum for discussions that support democracy.
You did some research on Danish Facebook users and ‘filter bubbles’. Can you explain the results of that study?
Anja Bechmann: Looking into a two-week dataset from Facebook’s newsfeed, we found that between 10 % and 27.8 % of a national sample (one mirroring the overall Danish Facebook population) was not exposed to the same links (10 %) and was not discussing the same issues (27.8 %) as the other participants in the sample. However, the two ‘bubbles’ do not overlap in terms of participants, and we see no significant isolated effects of age, gender, education or area of residence. We do, however, see a significant effect of what we call sociality (number of friends, number of group memberships, number of page likes). We also see more males in ‘filter bubbles’, because they are not as good at socialising as females.
Your research includes probes into how digital intermediaries are mining data, and into the concept of data intraoperability – dominant software providers trying to collect all the available data within their own central ecosystems.
Can you say what has changed over the last couple of years, and what the blind spots are in policy and public debate?
Anja Bechmann: What has changed is that Facebook and Google have altered permissions so that they can mine data across services. This adds to the complexity of the data profile and of course has attracted the attention of the EU, leading to antitrust allegations.
Another aggressive policy is the General Data Protection Regulation (GDPR). Privacy has been the overarching legal and ethical concern when it comes to platforms/digital intermediaries. This has come at the cost of transparency, freedom of science and democratic ideals such as the right to information. Ironically, the GDPR has made it difficult to actually know whether privacy breaches or potential privacy harms take place on platforms – especially Facebook – as independent researchers no longer have permission to run analyses of the data under regulated circumstances. What was designed to protect privacy is now potentially harmful to the privacy of individuals and groups.
What immediate changes do you think we need in terms of data governance and regulation? Can we hope for a globally coordinated movement on that front?
Anja Bechmann: We need to focus on the fact that societal knowledge and university-based research could play a role in data analysis, via the monitoring of the information and communication forums that form the basis for public discussion in a digital society.
There needs to be a much more balanced ethical discussion of fundamental rights and democratic needs. We ought to start discussing how to make sure that our public communication spaces follow democratic principles, and to define if and how we ensure that, once communities or influencers reach an audience of a certain size, the discussion is no longer private – even though crowds sometimes communicate in encrypted spaces.
You were a member of the High-Level Expert Group on disinformation (a group of experts appointed by the European Commission to advise on policy initiatives to counter fake news and disinformation).
Do you think any key stakeholders were not included in the group?
Anja Bechmann: Yes, the advertising industry.
You wrote a paper on the hidden layers of knowledge production in AI on social media. Can you explain some of its conclusions? Do you think we have to start seriously talking about AI and algorithmic regulation?
Anja Bechmann: I hope we will. Our article argued that we need to focus more on the work processes that surround applied AI in order to fully understand and regulate the political power held by developers and companies in an AI-driven society. We also argued for the need to implement solutions before something critical happens, and to work with value accountability by design.
What about the news? Are you concerned about how algorithmic structures used by social media companies are changing the concept of news and how we receive it?
Anja Bechmann: This is not a simple question, as it is a circular system. What we found was that a lot of clickbait surfaced in Facebook’s newsfeed in 2014. We also know from earlier studies that self-selection tends to be much stronger than algorithmic selection, meaning that people will click on things they find interesting, different and sensational. At the same time, we have a media economy under pressure, so the battle for the attention that drives advertising is intensifying, and this may lead to the use of problematic methods.