Interview with Professor John O’Keefe about the future of artificial intelligence in science and society.
John O’Keefe, professor of cognitive neuroscience at University College London (UK), was awarded the Nobel Prize in Physiology or Medicine in 2014. He won the prize together with May-Britt Moser and Edvard I. Moser “for their discoveries of cells that constitute a positioning system in the brain”.
“I always used to say to myself: I’m going to start worrying about computers when they are smaller than I am. And now we see the power of smartphones and how we have taken them on board. You shouldn’t worry about robots in the future – they are here already. The machines are here. We already interact with them in a very, very close and continuing way”, says Prof. O’Keefe.
Do you think that your research may be applied to artificial intelligence (AI)?
John O’Keefe: I think the relationship between brain research and machine intelligence – the development of machines which can carry out functions that mimic or in some cases surpass what human beings and animals can do – is really interesting, and it goes both ways.
Over the years, we’ve watched the development of AI and of deep neural networks (certain machine learning algorithms aiming to mimic the information processing of the brain). These technologies are very good – and are advancing over time – at classifying objects and classifying pictures, being able to spot a picture of an animal in the jungle and do many of the very difficult perceptual tasks that in the past were only done by animals and humans. So that’s been a great success story.
Some of the thinking behind the development of these machines has been inspired by what we thought we knew about, or actually did know, about the brain. For example, these devices consist of neurons which are connected to each other and which can increase or decrease the weights between those connections and how important one neuron or another is.
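The idea he describes – neurons connected by weights that can be increased or decreased, so that some connections matter more than others – can be sketched in a few lines. This is an illustrative toy in the spirit of early neural-network models, not the architecture of any particular system mentioned in the interview.

```python
# Toy artificial "neuron": a weighted sum of inputs passed through a
# threshold. The weights play the role of connection strengths between
# neurons; changing a weight changes how much one input matters.

def neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs exceeds the bias."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > bias else 0

inputs = [1.0, 0.5]

# With weak connections the neuron stays silent...
print(neuron(inputs, weights=[0.2, 0.2], bias=0.5))  # 0

# ...but strengthening one connection (raising its weight) makes it fire.
print(neuron(inputs, weights=[0.8, 0.2], bias=0.5))  # 1
```

In a real network, learning consists of adjusting many such weights at once so that the whole system's outputs improve on a task.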
While a lot of the early thinking on AI and deep neural networks was inspired by brain research, vice versa, those devices have been incredibly important in the development of brain research.
In what ways have digital technologies and AI been important for brain research? For better recording of brain activity?
John O’Keefe: Yes indeed. If we talk about animals or even humans, one of the greatest problems that we have had for many, many years was trying to put together behavior and changes in behavior, the things people or animals are doing, with brain activity. And now, over the years, we have developed very sophisticated devices for recording brain activity.
Now we can record from hundreds and thousands of cells at the same time in an animal which is doing various tasks.
What we haven’t been able to do until very recently is describe the animal’s behavior at a quantitative level – at a level which would enable us to see if there are, for example, cells which are related to when the animal goes into a place, or cells which are related to the animal interacting with another animal.
However, what AI has given us is techniques where we can take video films of the animals – for example, two animals interacting – and then we can train the machine to identify each animal and to identify different parts of each animal’s body. So we can now say in very, very strict mathematical terms how far away the animals are, how far apart their noses are, even how they’re interacting with each other. So things that we have wanted to do for over 100 years and have been trying to understand: now we can do them and it’s not too difficult.
We train the machine to recognise the animals and what they’re doing. This way, we have descriptions of behavior which are at the same level as, and compatible with, our ability to record from cells. So we can actually use AI and it actually helps us to do brain research.
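Once a video tracker has turned each frame into coordinates for body parts, the quantitative measures he mentions – such as how far apart two animals’ noses are – reduce to simple geometry. A minimal sketch, with invented coordinates standing in for hypothetical tracker output:

```python
import math

# Hypothetical per-frame (x, y) nose coordinates for two animals, of the
# kind a video pose-estimation tool would produce. Values are invented
# purely for illustration.
animal_a_nose = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
animal_b_nose = [(3.0, 4.0), (1.0, 2.0), (2.0, 2.5)]

# Frame-by-frame Euclidean nose-to-nose distance.
distances = [math.dist(a, b) for a, b in zip(animal_a_nose, animal_b_nose)]
print(distances)  # [5.0, 1.0, 0.5]
```

A time series like this can then be aligned with simultaneously recorded neural activity, which is exactly the pairing of behavior and brain data described above.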
What do you think the future will bring?
John O’Keefe: Now AI to some extent is beginning to develop itself almost independently of our knowledge of the brain. There are some people, my colleagues for example, who are interested in seeing if we can use the way in which the machines learn to help us understand how the brain solves a task.
For example, in our case, if a machine can learn to find its way through a maze in the same way a rat can, the machine can inform and tell us about how the rat might be doing it.
My colleague Caswell Barry has been looking at this. He has found similar patterns of activity in the machine or program to what we see in the cells in the animal’s brain. He’s able to see some of the same cell types, and then we can ask, well, how do they work in the machine, how do they work in the AI program and see if that’s the same way that we think they work in the animal. So we can use AI technology as a model of the brain.
You don’t seem to be upset about the rise of artificial intelligence.
John O’Keefe: It depends. Like any new technology, it’s happening very quickly. We’re talking about something the first glimmerings of which were around the 1950s, so that’s almost 75 years ago. It’s happened very quickly. I think it’s always worrying when there are very disruptive technologies. It happened with the steam engine, it happened with the telephone, it happened with electricity. We’ve had these experiences many times in the past.
I think it is right to be worried about new technologies and try to predict how they’re going to change our societies and our individual lives. Quite often what happens is that if they’re disruptive in the short run, then they begin to settle down and we learn to interact with them, we learn to use them to our benefit. Nobody is afraid of electricity anymore and nobody is afraid of the steam engine or any of these things. Whether that will be the same with AI I don’t think we know. It’s hard to predict the future.
I think there are lots of stories that they’re going to take over from human beings. I suspect that is not what’s going to happen. I think we’re going to find that there’s a closer and closer interaction between human beings and these machines. We’re already interacting with these machines in a way which is almost unimaginable.
When I first started interacting with computers, my job was to pull out little rectangular cards and give them to young ladies to type on and punch holes in and feed into an IBM 360 machine. And that machine was a huge one – I never saw it, actually. I always used to say to myself: I’m going to start worrying about computers when they are smaller than I am.
And they quickly became smaller than you…
John O’Keefe: And now we see the power of smartphones and how we have taken them on board. I think we haven’t really thought very hard about it. But if you walk down the street and see everybody interacting with their smartphones, you shouldn’t worry about robots in the future – they are here already.
The machines are here. We already interact with them in a very, very close and continuing way. I think that what you will find is that the process will be more gradual. People get very excited about ChatGPT and things like that. I think there will be a more gradual change in the way we organise society and interact with machines as time goes on.
Do we need special regulation on AI? Do we need to think more about ethics in the interaction with AI?
John O’Keefe: That’s a good question. I think as with any technology we always find that the political thinking and the social thinking lag behind the technology and that was true in the early days of the steam engine or of the automobile. We took a while before we realised we had to regulate the use of these things. People started running into each other, running into people who were just walking by. So there’s no doubt that we will need some form of regulation as we begin to gain experience and see what the real threats are. I don’t have any doubt that it’s something that the politicians and we ourselves, who will deal with it, should be thinking about.
The problem is it’s very hard to predict – as Groucho Marx said, it’s very hard to make predictions, especially about things in the future. We just don’t really know where it’s going to go. But yes, we should be thinking about it – about what the dangers are and how these technologies could be used for good, and not for evil.
