Could you please introduce yourself, and say something about your background?
I did my PhD in physics, but the topic was artificial intelligence, and embodied artificial intelligence. Then I did several postdocs on pure machine learning, applying this knowledge, and these techniques to environmental problems at Eawag (ETH, Zurich).
Then I moved out due to the prevailing academic culture of “publish or perish” with its pressure to output a lot of papers, but not necessarily of good quality. So I moved to Eastern Switzerland University of Applied Sciences where this pressure is minimised, there is little pressure to publish, and I can actually work on resolving problems which I like and on knowledge transfer. Here we can still do a little bit of research and teach.
Can you talk about what ‘memristors’ are in general? How would you introduce them to a general audience?
Juan Pablo Carbajal: The idea is that you have an electronic device that can remember its state. It holds a value, and this value depends on its history: on what you have done to the device in the past.
Memristors are called memory resistors; the value they remember is a resistance value. When you put a current through a memristor, you can increase or decrease its resistance. When the current stops, the resistance stays where you left it, at least for some time. There is a crucial distinction between two types of memristors: volatile and nonvolatile.
Volatile means that these components keep the resistance value you set, but they eventually forget and drift back to some default value.
Nonvolatile memristors don’t do this. Once you set a resistance value, they stay there indefinitely; they never forget it.
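The write-and-relax behaviour described above can be sketched in a few lines. This is a toy model, not a physical device model; all parameters (default resistance, relaxation time, current sensitivity) are illustrative assumptions.

```python
import math

# Toy model of a volatile memristor (illustrative only, not a device physics model).
# Current through the device writes the resistance; with zero current, a
# volatile device relaxes exponentially back toward a default value.

R_DEFAULT = 1000.0   # default (relaxed) resistance, ohms (assumed)
TAU = 5.0            # relaxation time constant, seconds (assumed)
K = 50.0             # sensitivity of resistance to applied current (assumed)

def step(resistance, current, dt, volatile=True):
    """Advance the memristor state by one time step dt."""
    # Current through the device writes the state (sign sets the direction).
    resistance += K * current * dt
    if volatile and current == 0.0:
        # No current: the device slowly forgets, decaying toward R_DEFAULT.
        resistance += (R_DEFAULT - resistance) * (1 - math.exp(-dt / TAU))
    return resistance

# Write phase: push current for 1 s to raise the resistance.
r = R_DEFAULT
for _ in range(100):
    r = step(r, current=2.0, dt=0.01)
r_written = r

# Idle phase: with zero current, the volatile device relaxes back.
for _ in range(100):
    r = step(r, current=0.0, dt=0.1)

print(r_written > R_DEFAULT)                             # True: the write raised the resistance
print(abs(r - R_DEFAULT) < abs(r_written - R_DEFAULT))   # True: it has partially forgotten
```

Setting `volatile=False` in the idle loop would model the nonvolatile case: the resistance written in the first phase would simply persist.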
Why might memristors be a good choice for machine learning or AI applications? What’s the relationship between these technologies?
Juan Pablo Carbajal: There are many pros and cons. The type of AI that is used today is what we usually call disembodied AI. We run software on a machine that can represent basically anything.
We have a universal computer, a machine that you can program to do basically whatever you want, and the AI is realised as software running on that machine. It is completely decoupled from the real world.
This leads in general to very inefficient AI in terms of energy.
For example, even the most intelligent robots available consume orders of magnitude more energy than their biological equivalents. I don’t know the exact figure, but the Atlas robot, I think, consumes on the order of 10 times more energy than a human: roughly a kilowatt to move around, instead of the roughly 100 watts we use as humans to live. And moving around is all it does. If you wanted it to do everything else we do, you would expect that consumption to grow.
Nature is able to solve problems very cheaply. Our brain runs on very specific hardware to do what we do in our lives. Our mind, our brain, and our body evolved together, so they are completely entangled. Nature discovered some global fundamental principles that are independent of the hardware, but software and hardware are entangled in a symbiotic relationship. The body is not just a vehicle for carrying the brain around.
This entanglement between the software and the machine is actually something that historically we have been moving away from. In a way, we can bring them back together a little bit: the hardware and the software we want to run on it. In nature, [it seems that] neural systems are key for developing behaviours like the ones humans and animals have.
If a neuron is connected to other neurons, and these neighbours start firing (sending electric signals through their connections), then eventually this neuron will also start firing. It will stay in this firing behaviour for a while, but eventually relax when the neighbours stop firing. So, they forget in a way.
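The integrate-fire-relax behaviour just described is often sketched as a leaky integrate-and-fire neuron. The following is a minimal illustration of that idea with made-up parameters, not a biophysical model:

```python
# Minimal leaky integrate-and-fire sketch of the behaviour described above:
# input from firing neighbours pushes the membrane potential up; past a
# threshold the neuron fires; without input it leaks back toward rest
# (the 'forgetting'). All parameter values are illustrative assumptions.

V_REST = 0.0      # resting potential
V_THRESH = 1.0    # firing threshold
LEAK = 0.1        # fraction of the potential that leaks away each step

def simulate(neighbour_input):
    """Return a list of booleans: did the neuron fire at each step?"""
    v = V_REST
    spikes = []
    for drive in neighbour_input:
        v += drive                    # integrate input from firing neighbours
        v -= LEAK * (v - V_REST)      # leak toward rest
        if v >= V_THRESH:
            spikes.append(True)
            v = V_REST                # reset after firing
        else:
            spikes.append(False)
    return spikes

# Neighbours fire for the first 10 steps, then go quiet.
drive = [0.4] * 10 + [0.0] * 10
spikes = simulate(drive)
print(any(spikes[:10]))      # True: the neuron starts firing while driven
print(not any(spikes[10:]))  # True: it relaxes and stops when the input stops
```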
Neuromorphic chips are basically sets of neurons, or [electronic] models of neurons, and they are super fast. They consume very little energy. But it’s very difficult to make them do things. Despite the challenge, there are already cameras and sound processors based on these principles.
The memristor is actually very, very old. It’s from the late 1800s. [The electrical] phenomenon has been known in materials science for a long time, but it had not been identified as something useful until recently.
How are you going to use it for AI?
Juan Pablo Carbajal: First, consider nonvolatile memristors. They provide places where you can store things using very little energy. You can store values, and you can make these memories very dense. [So this means] more space for the same energy or the same space for a lot less energy.
There is an application for the volatile memristors too. They could have a potential big impact on machine learning and AI when we start looking at them as a sort of neuron: an artificial neuron. Here, we need the memristor’s relaxation behaviour: we need it to be volatile.
To make it brief, the impact of the nonvolatile memristors could be huge in terms of energy consumption, in terms of the cost of computer memory. But I don’t think that will bring us to new AI or to new machine learning methods.
Volatile memristors are a potential path to new kinds of AI: more bio-inspired, more neuromorphic.
What is the link between current chips and these new neuromorphic ones? Can we adapt the technologies we already have, or do we need to start from zero in developing new ways to program neuromorphic chips?
Juan Pablo Carbajal: It’s a very good question. I don’t think there will be a big shift to move everything [to neuromorphic chips]. We’re going to go in parallel. We will adapt these new technologies to the production process that we have, and try to integrate these chips into classical CMOS technology, and I think this is good. We have a lot of infrastructure and experience.
The software side is where we expect a paradigm shift.
If your software and your hardware are more entangled, we need to change the way we approach “programming”. This is not something that’s 100% new. With microprocessors 20 years ago, you had to do everything [manually]. Now you can program them in Python, using a toolchain that takes your code and translates it for your hardware. Building that layer is where the challenge is [for neuromorphic chips]. The result of this process will be machines that have at least some aspects of what we identify as intelligent behaviour.
When I think about AI’s energy consumption, that power is used in big data centres – do you think that is where neuromorphic hardware will be taken up as a replacement, say, for graphics cards, or is it something that will be more at the end user’s level?
Juan Pablo Carbajal: Memristors-based electronics in general are very good for specialised hardware. They do one thing very well. That’s where you get the energy gains. You don’t have a machine that can do anything. You have a machine that can do one thing very well.
I don’t think they will be an in-place replacement for GPUs. They will probably be a little chip within those GPUs that does something new and, of course, takes part of the job off the GPU. The GPU will, for example, outsource part of the visual processing to the neuromorphic unit.
So now the GPU will be doing less, and it will use less energy. Eventually, we will use those energy savings for something else, so we will end up consuming the same but doing more (a sort of Jevons paradox, which we need to think about to avoid further socio-ecological complications).
In a way, what you’re saying is quite philosophical – that the paradigm shift is more in how we think about designing the machines that do the tasks, and shifting them towards purpose-built solutions.
But that is a very different model from, for example, ChatGPT, where you’re trying to answer any question or do anything. You’re saying that perhaps the most effective AI tools will be ones built to solve more specific tasks.
Juan Pablo Carbajal: ChatGPT and these tools are actually not general. They solve a very specific problem. It looks general, but it’s actually a very, very small problem. That’s not to say it isn’t difficult.
The thing is, we don’t yet have a scientific or engineering methodology to tackle problems in a holistic way. It may be possible; we just don’t have a method to do it.
Our method of solving things is we break them apart, divide and conquer, and we build more and more complex tools by connecting these little problem solvers that we have. We’re going to build small chunks: little machines doing little tasks, and then we’re going to connect them and they will do more than the sum of their parts, but first we need to develop those components.
There is no reason to believe that ChatGPT can do everything. And what we call AI today is a rebranding because we are not doing anything different from what people called machine learning before, function fitting earlier, and data regression even before that. It’s all the same thing. We just have been renaming it over and over. You fit a line to a bunch of points and people call it AI.
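The “fit a line to a bunch of points” remark is literal: ordinary least-squares regression needs no ML framework at all. A minimal sketch, using only plain Python:

```python
# Ordinary least squares: the 'fit a line to points' that predates
# the machine learning and AI labels.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y over variance of x gives the slope.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]           # points lying exactly on y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)        # → 2.0 1.0
```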
The real challenges are hidden behind these different names. [For example, consider that we don’t have] widespread vision sensors that behave like eyes and perform like eyes. Your eyes have dynamic range. I just tried to show you the snow, right? And you couldn’t see it [through the camera] because it was white. But my eyes see it perfectly. So there’s something in that camera that is not doing what my eyes can do.
There is a neuromorphic visual sensor, and it comes close to that wide dynamic range. That’s where we’re going to get closer [in terms of intelligence] to the hardware we see in nature. We are creeping toward it; it won’t just happen tomorrow.
I wanted to ask you your thoughts on the evolution of the field over time. What kind of support will it need to achieve its goals?
Juan Pablo Carbajal: The question is, what’s your objective? If you are going to keep measuring performance or success in terms of billions of dollars or euros or whatever, you should just keep doing what you are doing now: more incubators, easier access to this technology for small companies. You will get more monetary production if you go this way.
How will we build these development platforms that we mentioned before? How do we represent these problems? We are still leaning very heavily on linear algebra and ordinary differential equations. Are those really the mathematical tools we need to solve these problems in AI? These questions are not being explored much. It’s also not easy (and it’s getting harder) to get funding to investigate these kinds of fundamental questions.
[These types of questions need] public financing, and maybe we will learn from the past and set the goals in a better way. If we are developing a new technology, we should also consider the outcomes in terms of environmental and socio-political impact. We should promote a holistic view, especially of the outcomes, not just the outputs.
The output is your chip. That’s OK. But we should also look at the impacts it could have in broader, more interdisciplinary terms. To get the breakthroughs we need, I think the field needs this kind of support.
Is there anything you feel was not covered and you want to add at the end?
Juan Pablo Carbajal: Some people see Europe as a wasteful place that blocks innovation. I see it as probably the last bastion of what a multi-objective society can look like: not just focused on money or industry or whatever.
When you have this classical way of optimising a product, you have one objective function, right? We know the problems that brings: misalignment issues.
In academic research, we have also seen this problem, with impact factors and publication counts being the only criteria for obtaining academic positions, and look where it brought us. I see a very wasteful academia. It adds more noise to the environment. This “one-way” of thinking is weakening some values that I consider core values of science.
I don’t have this very utilitarian way of wanting to measure scientists in terms of how much they produce. I think we are slowly changing this.
The same applies to other activities in our society, and Europe is, in my view, one of the few regions where we are trying to keep more than just one interest in focus.

