Interview with Catelijne Muller, president of ALLAI, an independent organisation that promotes responsible artificial intelligence (AI). She shares her view on the public perception of AI.
Ask teenagers to unlock their smartphones, open their social media apps and then show everything openly to their neighbour, and of course they will say: “No way we’re doing that, are you out of your mind?!”
The example comes from Catelijne Muller, president and co-founder of ALLAI, an independent organisation that promotes responsible AI. Muller explains that it is taken from a theatre play: “My brother is a screenwriter for TV series and plays. In one of his interactive plays, he included exactly that situation for an audience of teenagers. That to me is a good example of the role art can play in creating awareness of the impact of AI on our everyday lives.”
Muller was a member of the former EU High Level Expert Group on AI that advised the European Commission in recent years. Her contacts with policymakers, governments, businesses, universities and civil society organisations have allowed her to form a picture of various public perceptions of AI.
What is your view on the public perception of AI among these different groups?
Catelijne Muller: The picture is very varied. It differs by group and also somewhat by country. But let me try to draw a few conclusions. In recent years, policymakers have become increasingly aware of the social impact of AI as well as its ethical and legal aspects. They also have a reasonably good sense of what AI can and cannot do.
I see that in the Netherlands, where I come from, awareness about the impact of AI seems to be slow to sink in. Even though the Dutch childcare benefits scandal [in which authorities wrongly accused an estimated 26,000 parents of making fraudulent benefit claims] is seen internationally as Europe’s largest algorithmic scandal, this does not seem to have fully awakened Dutch policymakers.
As ALLAI, we have also organised round table discussions in Germany, and I notice that policymakers there are better informed about the risks of AI and the human rights at stake than their counterparts in the Netherlands.
And what about businesses and societal organisations?
Catelijne Muller: Among companies, I see that the big ones know they need to address responsible AI and comply with laws and regulations. They often have teams to map out how to deal with that. In small companies, this awareness is often completely lacking.
For civil society organisations, the picture is also mixed. More activist organisations like ‘Bits of Freedom’ or ‘Access Now’ are very comfortable with the topic. Social partner organisations, however, lag a bit behind. I still often hear the more simplistic rhetoric: we have to keep innovating and keep growing.
Your organisation ALLAI now has an office inside the new AI building of the computer science department of the University of Amsterdam, LAB42. What is your view on the perception of the implications of AI at a place where so many scientists are developing fundamental AI techniques?
Catelijne Muller: Of course the scientists do great foundational work. But when I talk with students, I often hear that they are aware of the risks of AI, and that some elective courses touch upon the topic, yet they lack a more structural approach from the moment they start their studies.
How to achieve that?
Catelijne Muller: The key is a multidisciplinary approach in which computer science and mathematics are integrated with social sciences, humanities, and law. I know that this is being tried, but in practice it often turns out to be hard. Therefore, at ALLAI we have developed educational materials that can be used to teach about ethical guidelines for AI.
What else does ALLAI do in the field of the public perception of AI?
Catelijne Muller: We organise responsible AI knowledge and awareness programs for policymakers and societal and private organisations. Last October we organised the Responsible AI Conference as the official opening event of World AI Week 2022.
In November we also started taking part in the EU-funded project ‘AEQUITAS’, which aims to build an environment in which the unfairness, biases or otherwise inequitable outcomes of an AI system can be assessed and repaired. Part of this project will, over the next three years, assess the public perception of the concept of AI fairness.
Finally, how do you see the role of the arts in the public perception of AI?
Catelijne Muller: A few years ago, every AI story was illustrated with images like Robocop or the Terminator. This is damaging for the public perception of AI, because it is so far removed from reality. Luckily, this has become much less common.
On the positive side, the play my brother wrote the scenario for is a good example of what the arts can achieve. Another great art project showing the impact of AI was ImageNet Roulette, made by artist Trevor Paglen and AI researcher Kate Crawford. The project allowed you to upload a photo of yourself and then showed you how your image was labelled.
Apart from some correct labels, people were also labelled in really worrisome ways, such as ‘promiscuous woman’ or ‘criminal’. The project revealed how image recognition systems are trained and what problems this can lead to. So, yes, smart art projects can play a big role in the public perception of AI.