Making the medical imaging pipeline smarter

An interview with Prof. Dr Daniel Rückert, Alexander von Humboldt Professor for Artificial Intelligence in Medicine and Healthcare at the Technical University of Munich (Germany). In 2020, he received a EUR 2.5 million European Research Council (ERC) grant for the five-year project ‘Deep Learning for Medical Imaging: Learning Clinically Useful Information from Images’, which runs from January 2021 to December 2025.


What is the aim of your ERC project?

The aim is to make medical imaging smarter by using artificial intelligence (AI). Smarter in the sense of how we extract better and more quantitative information, and how we can make the imaging time shorter so that it is easier for the patient to tolerate. The grand vision that I set out in the ERC grant proposal is to optimise the whole imaging pipeline so that the patient gets the most benefit from the information we acquire while using medical imaging for diagnosis.


What does the imaging pipeline look like?

Traditionally with medical imaging – for example with MRIs – you start with the image acquisition. You acquire data, but it is not directly interpretable by a human. Next is the reconstruction of the data: the data acquired is transformed into an image that can be interpreted by a human. The next step in the pipeline is the analysis: for example, measuring the size of a tumour. And the final step is the interpretation of the image by a doctor, which leads to a diagnosis. In traditional medical imaging there is no interaction or feedback between the different steps in this pipeline.
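To make the four stages concrete, here is a minimal sketch of the traditional pipeline as a strictly one-way chain of steps. The function names are hypothetical placeholders standing in for real scanner and analysis software, not anything from the project itself.

```python
# Illustrative only: a traditional, feed-forward imaging pipeline.
# Each function is a hypothetical placeholder; data flows in one
# direction, with no feedback between the steps.

def acquire():
    """Collect raw scanner data (e.g. MRI k-space samples)."""
    ...

def reconstruct(raw_data):
    """Turn the raw data into an image a human can interpret."""
    ...

def analyse(image):
    """Extract measurements, for example the size of a tumour."""
    ...

def interpret(measurements):
    """Support the doctor's diagnosis based on the measurements."""
    ...

raw = acquire()
image = reconstruct(raw)
measurements = analyse(image)
report = interpret(measurements)
```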


So, what do you propose should change?

We are working on a close coupling of all four steps in the pipeline: acquisition, reconstruction, analysis and interpretation. This has many advantages. Take the example of a patient moving in the scanner. Typically, you can’t use this data because of motion artefacts. But, with the coupling we want to achieve, you can immediately go back to the acquisition step and see if you can get more data. Or, you can go back to the reconstruction step from the analysis and say, “We need a better reconstruction”.
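A rough, purely illustrative sketch of this kind of coupling is shown below. The placeholder functions, the quality threshold and the bound on the number of feedback rounds are all assumptions made for the example, not the project's actual design.

```python
# Illustrative sketch of a coupled pipeline: if the analysis step detects
# a problem (e.g. motion artefacts), control returns to acquisition or
# reconstruction instead of the scan being discarded. All names are
# hypothetical stand-ins.
import random

def acquire(extra=False):
    # Placeholder for scanner acquisition; returns fake "raw data".
    return {"samples": 200 if extra else 100}

def reconstruct(raw, high_quality=False):
    # Placeholder reconstruction; quality grows with data and effort.
    return {"quality": raw["samples"] / 100 + (0.5 if high_quality else 0.0)}

def analyse(image):
    # Placeholder analysis; flags motion artefacts at random.
    return {"motion": random.random() < 0.2, "quality": image["quality"]}

raw = acquire()
for _ in range(3):                      # bounded number of feedback rounds
    image = reconstruct(raw)
    result = analyse(image)
    if result["motion"]:                # feedback to the acquisition step
        raw = acquire(extra=True)
        continue
    if result["quality"] < 1.5:         # feedback to the reconstruction step
        image = reconstruct(raw, high_quality=True)
        result = analyse(image)
    break
```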


What is the role of AI?

Actually, we use AI at every step, and more specifically deep learning, which is a technique based on neural networks. Traditional deep learning methods cannot deal with some of the data we have, so we have developed our own deep learning methods. As the ERC project only began in January 2021, we don’t have concrete results yet. But, of course, we do have some earlier results on which we are presently building. One of these results shows that our deep learning methods can reconstruct the image of a beating heart six times faster than traditional deep learning methods. And we get much better image quality than the blurry images you traditionally get if you collect the data quickly.
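The project's own reconstruction methods are not described in the interview; as a generic illustration of what learned reconstruction means, the sketch below trains a small convolutional network to map an aliased image (reconstructed from undersampled data) to a cleaner one. The architecture, loss and data here are illustrative assumptions, not the project's approach.

```python
# Generic learned-reconstruction sketch (not the project's method): a small
# CNN predicts a correction to an aliased image from undersampled k-space.
import torch
import torch.nn as nn

class SimpleReconNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, aliased_image):
        # Predict a residual correction on top of the aliased input.
        return aliased_image + self.net(aliased_image)

model = SimpleReconNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

undersampled = torch.randn(4, 1, 128, 128)   # stand-in input batch
fully_sampled = torch.randn(4, 1, 128, 128)  # stand-in ground truth

prediction = model(undersampled)
loss = loss_fn(prediction, fully_sampled)
loss.backward()
optimiser.step()
```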


What comes next?

The logical next step is to optimise the images for the next stage in the pipeline: analysis. Can we do the analysis completely automatically? An AI algorithm might automatically identify certain structures of the heart, like the left or right ventricle, or it might automatically measure the size of the heart or how much blood the heart is pumping. Maybe in the future, it will even be possible to go directly from the image acquired to interpretation.
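As a small illustration of the kind of automatic measurement described here, the sketch below computes a ventricle volume from a segmentation mask and the voxel size. The mask is random stand-in data and the voxel dimensions are assumed values; a real analysis step would take the mask from a trained segmentation model.

```python
# Illustrative automatic measurement: volume of a (hypothetical) left-
# ventricle segmentation mask, counted in voxels and scaled by voxel size.
import numpy as np

VOXEL_VOLUME_ML = 0.1 * 0.1 * 0.8        # assumed 1 mm x 1 mm x 8 mm voxel, in ml

segmentation = np.random.rand(128, 128, 40) > 0.99   # stand-in LV mask
lv_volume_ml = segmentation.sum() * VOXEL_VOLUME_ML

# With masks at end-diastole and end-systole, the same voxel count yields
# the ejection fraction, i.e. how much blood the heart is pumping.
print(f"Estimated left-ventricle volume: {lv_volume_ml:.1f} ml")
```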


Wouldn’t that lead to a black box problem, where a human doctor no longer has any idea why the computer comes up with a certain interpretation or diagnosis?

That’s a very good point. Therefore, the notion of trustworthy and reliable AI is explicitly part of our ERC project. I suspect that we will always produce an image alongside the output of the AI system. That image is something that doctors can use to check whether the black box diagnosis is correct. We also want to quantify the level of uncertainty of the deep learning models. The system itself might even suggest that it’s necessary to acquire more data to reduce uncertainty until it has enough confidence in its analysis or diagnosis.
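One common way to quantify a deep model's uncertainty, chosen here purely as an illustration and not necessarily the project's approach, is Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and treat the spread of the predictions as an uncertainty estimate that can trigger a request for more data.

```python
# Illustrative uncertainty estimate via Monte Carlo dropout; the model,
# threshold and "needs more data" rule are assumptions for the example.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1)
)

def predict_with_uncertainty(model, x, n_passes=30):
    model.train()                      # keep dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_passes)])
    return samples.mean(dim=0), samples.std(dim=0)

features = torch.randn(8, 16)          # stand-in image-derived features
mean, std = predict_with_uncertainty(model, features)

# If the uncertainty is too high, the system could ask for more data
# before committing to an analysis or diagnosis.
needs_more_data = (std > 0.5).any()
```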


The ERC project lasts five years. What do you think is realistically achievable in that period?

We have funding for five researchers. Three of them have already started, and the other two will start later this year. I think that it is realistic that we will be able to show the feasibility of our approach and make the neural networks in our deep learning models more trustworthy and less of a black box. And I think we can show this feasibility through one concrete demonstration, like measuring structures in the heart or measuring a tumour. Such a demonstration might get industry interested in it. But I do not expect that our approach will be immediately generalisable to all kinds of applications. Some people are sceptical as to whether you can automate the whole medical imaging pipeline, but I think that even if we don’t reach that ultimate goal, we will still make a lot of improvements in medical imaging along the way.


Related EU projects
EuCanImage
ProCAncer-I
INCISIVE
