Interview with Nicolas Spatola, a leading researcher on social robotics, AI, and algorithmic bias at the CNRS Laboratory of Social and Cognitive Psychology at the University of Clermont Auvergne.
In its draft report on AI, the European Parliament underlines that the development of AI will bring substantial changes in the work environment: do you think that this, in itself, represents a potential challenge for democracy?
Nicolas Spatola: I am convinced that AI will challenge democracy, for two reasons. First, as mentioned in the resolution, there is a consensus around the need for a ‘guiding ethical framework based on (…) existing ethical practices and codes’. But what is ethics? If we start to rely on AI, do we want it to promote well-being? Equality? Profits? Ethics is a concept that is culturally and socially defined, so it seems unlikely that we will reach a consensus on that topic. Second, there is a huge blind spot on AI bias. Several experiments have shown that AI can reproduce human cognitive biases such as sexism (e.g., GloVe), racism (e.g., Northpointe’s COMPAS) and even become psychopathic (e.g., Norman). These issues are well studied in the socio-cognitive sciences, yet this research domain is often overlooked in reports on AI uses and ethics.
In your work, you advocate the development of a strategy of pedagogy around digital tools, and especially around artificial intelligence tools: why is such an effort necessary? What are your suggestions on how to build such a strategy?
Nicolas Spatola: I don’t believe in the conceptualisation of AI as something that society can simply accept or reject. Imagine that all cellphones stopped working right now, or the internet, or transport. How would you react? Could you maintain your life as it was? Obviously not. We evolve through technology that changes our behaviours, and these new behaviours create new technologies. Now, for a positive coevolution, there need to be mutual benefits between society and technology, and mutual benefit only works through mutual understanding on both sides. The adaptation of citizens requires action by political leaders. I cannot accept that some people are left behind in this technological revolution. Technology may reduce inequality, but it can also increase it. The necessary changes to education must start from a very early age and must be guided by political will.
Do you think that if citizens were more knowledgeable about AI, this would help developers deliver AI tools more in line with the core values of democracy? How could this work, especially in relation to tackling bias?
Nicolas Spatola: On the one hand, the people who take part in developing ethics, morality, and the protection of democratic values in AI should not be the people who build AI. The reason is simple: they cannot be objective (e.g., because of economic interests). They should, however, be available as experts to answer the questions of the ethics specialists working on the topic. On the other hand, it is necessary, and I want to insist on this, that researchers from all scientific fields take part in the development of ethics and morality. One cannot understand social impacts without sociologists, one cannot understand cognitive bias without cognitive psychologists, and one cannot understand economic impacts without economists. This seems obvious, but it is often forgotten in committees.