European Science-Media Hub

A scientist’s opinion: interview with Sonia Contera on generative AI in science


Spanish physicist Sonia Contera is a professor at the University of Oxford (UK) who specialises in physics at the interface of biology, nanotechnology and information processing. She will be a keynote speaker at the STOA workshop ‘Generative AI and scientific development’ on 29 April 2025.


We’ve heard a lot about how artificial intelligence (AI) is transforming science. Could you give an example from your own research?

Sonia Contera: AI has been around since the 1950s, so it’s not such a new thing. What has changed is that in recent years we got better computers, and the new generative models have more capabilities.

So, for example, I’ve started to play with it in materials research. We are making net zero concrete, to reduce emissions when we produce concrete for buildings. At present, buildings account for about 40% of the world’s carbon emissions. You can get AI to learn the structures of concrete and then help you to optimise central parameters to reduce emissions.


What are the advantages of using AI in science?

Sonia Contera: AI is helping to analyse and classify data. We can enhance existing ways of simulating problems with AI. It’s not difficult, which increases the uptake. If you are computer literate, it’s quite simple to start playing with these programs. You can download them from online repositories, like Python libraries and so on. It’s a great tool!

AI can also help a lot with repetitive tasks, together with robotisation. For example, biological research is extremely tedious and repetitive. There is no creativity in doing the same things again and again. It’s all these biomolecules… if you robotise that, it can accelerate a lot of processes.


What are the risks of using AI in science?

Sonia Contera: The main problem of generative AI is that it intrinsically generates errors. There are always about 15% errors in whatever it produces. It’s difficult because the AI itself does not know whether it has made an error or not, so you need to supervise the results.

Another risk is related to a problem with the way we measure success in science. One of the metrics of success is the number of publications, which is a crazy thing in itself: we give people points for publishing, so people game the system. In this context, AI is making a mess. With AI, you can churn out papers. So if the publication world was broken before, it’s going to be chaos now. Maybe this will just explode it.

There are also risks with data ownership. Increasingly, reviewing itself is done by AI. Scientists are overworked, constantly asked to review proposals. So of course, people are just putting these proposals into ChatGPT, which is getting all the information. We are giving away our IP. That is a huge risk. When you store your proposal in a cloud repository, you don’t know the terms and conditions. Do they own it?


Do you think that AI could ever rival human scientists in creativity?

Sonia Contera: If the world were digital, in a nightmarish world of complete top-down control in which all processes are automated, then yes. But we know those systems lead to death: the death of societies and the deaths of people. The world is not digital. The AI does not know why we need something. It’s just giving you ideas. It’s us who link them to the ideas of other people and of the world. So, I don’t think we’ll be substituted.

But I do find it useful for brainstorming ideas. LLMs can help you with the creative process by quickly summarising ideas that were already floating around in the world about something. Even when it makes a mistake, it can be useful. It’s like an additional dialogue with someone.

How does scientific discovery occur? Testing all possibilities is impossible, because reality is infinite and continuous; it’s not digital. A richer ecosystem in which AI, humans and robots interact, with different ways of funding, is probably the best way to foster creativity and new things. That’s how biology works: creating a big mess and letting it go, and then one thing will be different. You can see it with generative AI. When we were trying to control the process, every computation was digitised; we wanted to control every step in the computation. Computers could not talk. It was only when we did not control it that they learnt to.


What barriers do you think that European scientists face in using AI for science?

Sonia Contera: European science is extremely conservative. Bringing in new ideas is almost impossible, because there is a low tolerance for risk. As a result, people escape the conservative part of academia to bring their ideas to the world with whatever money is available. And that’s one of the reasons people go to American VCs, for example: they’re more willing to take risks.

In Europe, we’re in big trouble because we have an enormous reluctance to set creativity free, to break things, even though the whole system is breaking by itself. We are still repressing the creativity and the diversity that we have in Europe. We are actually producing risks by being so conservative, by not being able to adapt.


What can be done at the EU level to unleash AI and mitigate the risks?

Sonia Contera: EU policymakers should talk to companies and scientists, and create an environment where it’s easy to invest in risky projects. We have a very diverse continent, with people from all different cultures. That should be able to create an ecosystem with lots of ideas. But when you are very creative, you leave Europe. There are lots of opportunities for people who want to disrupt, but we need money. So we need companies to get involved, not only scientific institutions. We need to create an ecosystem with different types of funding, a space where people can bring new ideas. Because otherwise we will not keep up. We are not keeping up anymore.

The EU is in deep crisis. We’re in a crazy, competitive world, and I hope the EU realises this. There needs to be a strategy: the EU needs to look at the world and see what we can excel at, and then promote it.

But the EU needs to stay guided by democratic values. Policymakers should focus on what is good for actual citizens in Europe, not just for themselves or for the companies that are lobbying.
