Interview with Prof. Karim Lekadir, Director of the ‘Artificial Intelligence in Medicine Lab’ at the Universitat de Barcelona (BCN-AIM). He co-authored a study for the European Parliament’s Panel on the Future of Science and Technology (STOA), entitled ‘Artificial Intelligence in healthcare: Applications, risks, ethical and societal impacts’, which he will present on 11 February 2022 at the STOA online workshop ‘Ethical issues in Covid-19 pandemic: The case of digital health applications’.
What is artificial intelligence (AI)?
Karim Lekadir: A general definition of AI is any set of computer programs able to mimic or even surpass human intelligence. For instance, machine learning aims to perform certain tasks by learning from examples. Machine learning is closely linked to big data, because the more data is available, the better the algorithms can learn to perform the tasks in question.
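[Editor's note: the idea of "learning from examples" can be illustrated with a minimal sketch. The data and labels below are entirely hypothetical; the code shows a one-nearest-neighbour rule, one of the simplest example-based learning methods, not any system discussed in the interview.]

```python
# Minimal illustration of learning from examples: predict the label of
# a new case by finding the most similar stored example
# (a 1-nearest-neighbour rule). All data here is invented.

def predict(examples, new_case):
    """Return the label of the stored example closest to new_case
    (smallest squared Euclidean distance between feature tuples)."""
    closest = min(
        examples,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], new_case)),
    )
    return closest[1]

# Toy "patient" records as (features, label) pairs -- purely illustrative.
examples = [
    ((0.1, 0.2), "healthy"),
    ((0.9, 0.8), "at risk"),
]

print(predict(examples, (0.85, 0.9)))  # closest to the "at risk" example
```

With more examples, the algorithm has more past cases to compare against, which is why machine learning benefits from big data.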
What are the most useful applications of AI for medicine?
Karim Lekadir: In medicine, an AI system trained on clinical data might help to diagnose diseases earlier or to produce an individually recommended course of treatment based on past examples of medical diagnoses, treatments, patient follow-ups and outcomes.
Another promising application concerns the processing of complex data such as medical images. Some key information in medical images can go undetected because of small, indiscernible signals that a radiologist cannot easily see with the naked eye. However, an AI algorithm could be trained to identify these signals and assist radiologists in making more accurate diagnoses. Moreover, AI could be used in healthcare administration: patient triage (prioritisation of medical care), personnel and resource allocation, and process organisation. Finally, AI could form the basis of home-based healthcare programmes for self-management of medication and exercise needs, acting as a link between clinicians and patients.
Looking to the future, where do you imagine we might find AI in a hospital?
Karim Lekadir: AI has already made a big impact in medical imaging, where it can serve to automatically identify the organs or lesions of interest, saving clinicians a lot of time. For more complex tasks such as diagnosis or treatment recommendation, more work is required to validate emerging AI solutions and to demonstrate their clinical safety. Furthermore, for AI tools to be adequately used in future healthcare, clinicians will require training and skills in AI. As Curtis Langlotz, Professor of Radiology at Stanford University, puts it: “Artificial intelligence will not replace radiologists … but radiologists who use AI will replace radiologists who don’t.”
What are the potential risks of AI use in medicine?
Karim Lekadir: First of all, the lack of clinical safety. Medical AI may make mistakes that harm the patient, due to problems with data quality, differences in the clinical environment (very different patient populations in different hospitals), or human error (lack of proper training in using AI).
Second, there are ethical concerns such as potential discrimination against minorities and other population groups. AI outputs can be biased if the data source is itself biased with respect to ethnicity, sex, or other factors. In some cases, AI algorithms have been found to be more accurate for white patients than for black patients, or for male patients than for female patients, due to imbalance in the training data. However, researchers are currently working on solutions to compensate for such biases in the training data and to obtain AI tools that treat all patients equally, independently of their sex, gender, age, or ethnicity.
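[Editor's note: one common family of techniques for compensating for imbalanced training data is sample reweighting. The sketch below shows inverse-frequency weighting on a made-up grouping; it is only an illustration of the general idea, not the specific methods used by the researchers mentioned.]

```python
# Hypothetical sketch: give each training sample a weight inversely
# proportional to the frequency of its group, so that an
# under-represented group contributes equally overall during training.

from collections import Counter

def group_weights(groups):
    """Return one weight per sample: n / (k * count_of_its_group),
    where n is the number of samples and k the number of groups."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Imbalanced toy data: three samples from group "A", one from group "B".
weights = group_weights(["A", "A", "A", "B"])
# Each "A" sample gets weight 2/3; the single "B" sample gets weight 2.0,
# so both groups carry the same total weight.
```

Training algorithms that accept per-sample weights can then use these values so that errors on the minority group count as much as errors on the majority group.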
Third, accountability in medical AI is complicated because there are so many actors involved. When AI misbehaves or the output is incorrect, who is responsible for the error? The clinician, the hospital, the AI manufacturer, the developers? This is currently unclear and that’s why transparency and traceability are so important in medical AI, to provide continuous information on how the AI tool is designed, developed, validated, and used in day-to-day practice.
Fourth, there are also privacy and security concerns. AI systems are prone to cyberattacks, which could either disrupt their normal functioning or compromise data protection. That’s why it is important to build more robust computer systems and increase the layers of protection around AI tools.
Finally, lack of acceptance and trust is an important risk. Even if an AI system is accurate, reliable, secure, and unbiased, it may still not be accepted because clinicians and patients do not understand or trust the new technology. In this respect, increasing education on AI for both clinicians and the public, as well as involving them as stakeholders throughout the whole development process, might increase AI acceptability and applicability.
We need to be aware of the risks associated with AI and put in place the right solutions and processes to minimise their implications for the patients and healthcare systems.
How inaccurate is the public perception of AI as human-like robots?
Karim Lekadir: Robotics is an important part of AI, including in healthcare, where AI-driven robots are being developed to assist vulnerable patients at home or help surgeons perform complex surgeries. However, this does not mean that fully autonomous robots will dominate future medical practice. Future AI will assist with, facilitate, and speed up certain processes, but will not fully replace humans in the near future.