Dr. Wannes Van Hoof works at the Cancer Centre of Sciensano, the Belgian institute of public health. There, he leads a team working on patient and citizen engagement and on the ethical aspects of the secondary use of health data, innovative cancer therapies and the implementation of genomic technologies in healthcare.
As artificial intelligence (AI) becomes increasingly integrated into healthcare systems, concerns about data privacy, algorithmic bias, and transparency have surfaced. In your opinion, what strategies or frameworks can be implemented to address these ethical challenges and ensure responsible AI adoption in public health?
Wannes Van Hoof: Generally, AI can’t function without the input of huge datasets, often involving personal data of citizens and patients. Once these datasets are sufficiently aggregated and fully anonymised, however, the General Data Protection Regulation (GDPR) no longer offers those people legal protection.
In essence, their data can then be used for purposes that the data subjects do not support (e.g. to discriminate against them, for purely commercial interests, etc.).
I think we should develop frameworks for collective decision-making, drawing on methods from deliberative democracy, to guide how AI is developed and implemented using our data.
In essence, this is a question about how we want to be treated by an AI looking at parts of our lives, and those rules need to be established by citizens themselves. An example of what that could look like can be found in a recent report of the joint action “Towards the European Health Data Space” (TEHDAS), which summarises people’s views on how their health data could be used for secondary purposes in the future.
Within the European Union’s healthcare landscape, how do you perceive the intersection of artificial intelligence and ethics?
Wannes Van Hoof: I mainly agree with the principles established by the EU and with the framework developed in the AI Act. While there is a lot of red tape, I think we’re doing a good job of not letting a technological imperative guide us, but instead taking our time to establish a value framework. See, for example, the European approach to artificial intelligence.
Success stories in AI-driven initiatives within the European Union highlight the potential of technology to transform public health. However, they also underscore the importance of ethical oversight and accountability. From your perspective, how can EU policymakers and healthcare professionals collaborate to establish robust regulatory frameworks that balance innovation with ethical considerations in AI-driven healthcare solutions?
Wannes Van Hoof: I think the way to collaborate is through empowering local structures (e.g. ethics commissions, patient review boards, etc.) and through embedding ethics in the funding of EU research and implementation projects, a major driver of progress in Europe. In that sense, it would be good practice to involve citizens and patients in setting priorities for funding programmes.
In your experience, what role does interdisciplinary collaboration play in navigating the ethical landscapes of AI in healthcare? How can stakeholders from diverse fields, including ethicists, data scientists, policymakers, and healthcare professionals, work together to foster ethical AI development and promote the well-being of patients and communities?
Wannes Van Hoof: There is a real danger in letting a single discipline or a single group of stakeholders determine the ethical framework for the development of AI technologies. Everyone has different experiences, expertise, incentives and interests informing their perspective, and there is no formula for weighing economic benefits against societal values.
