Diagnosing with Data: Delights and Dilemmas of Medical Decision Models
Developments in Artificial Intelligence (AI) are moving incredibly fast and becoming increasingly important. AI’s potential applications are also being explored in the medical world. For instance, can AI start playing a role in making medical decisions? What conditions must these kinds of ‘decision models’ meet, and what challenges and risks do we then encounter from a legal and ethical point of view? Computer scientist and professor Johan Kwisthout works on these and other questions at the Donders Institute, Radboud University, within the Personalised care in Oncology consortium and the ELSA lab.
Johan Kwisthout is keynote speaker at the Eurotransplant Annual meeting 2025.
“I want to ‘empower’ the patient to get precisely the information from the system that is needed to make a good decision.”
Professor Johan Kwisthout is working on technology for decision-support systems for clinicians. These so-called ‘Bayesian networks’ are smart mathematical white box AI models that help in medical decision-making because they can transparently calculate different scenarios based on data. These models include statistical knowledge and medical guidelines on treatments, such as in oncology.
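As a rough illustration of what such a transparent calculation looks like, the sketch below builds a toy Bayesian-network-style model by hand. The structure (lymph-node metastasis influencing five-year survival under two treatment options) and every probability are invented for the example; they are not taken from the consortium’s models or from any clinical data.

```python
# Minimal, purely illustrative sketch of a Bayesian-network-style calculation.
# The structure (metastasis -> five-year survival, per treatment option) and
# every probability below are invented for this example; they are not taken
# from the consortium's models or from any clinical data.

# P(lymph-node metastasis): hypothetical prior probability
P_METASTASIS = 0.07

# P(five-year survival | treatment, metastasis): hypothetical conditional probability table
P_SURVIVAL = {
    ("surgery", True): 0.60,
    ("surgery", False): 0.92,
    ("watchful waiting", True): 0.35,
    ("watchful waiting", False): 0.90,
}

def survival_given_treatment(treatment: str) -> float:
    """Marginalize out the unobserved metastasis variable for one treatment scenario."""
    return (P_METASTASIS * P_SURVIVAL[(treatment, True)]
            + (1 - P_METASTASIS) * P_SURVIVAL[(treatment, False)])

for option in ("surgery", "watchful waiting"):
    print(f"{option}: P(five-year survival) = {survival_given_treatment(option):.3f}")
```

Because every number in the table is explicit, a clinician can trace exactly how each scenario’s outcome was computed, which is what makes this kind of model ‘white box’.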
“It all started just before the coronavirus pandemic. I co-supervised a student with a fellow researcher at IKNL who was also working on AI. We wanted to do more with this topic, and together with other collaborators we founded this consortium. That may sound grand, but there was a lot of interest in it, because basically everyone knows someone with cancer. When I first shared my AI knowledge with the clinicians, the collaboration quickly took off. With my technical background in particular, I can make valuable contributions.”
Medical AI Models
“We are investigating how AI can help in individual decision-making for radical treatments, especially in oncological care. The central issue is how to capture the questions that arise in the clinic in such a ‘structure’. For example, we can use a model to calculate the probability of survival. We also look at important factors such as quality of life and how to model that. That can vary from patient to patient and requires a specific kind of calculation. This focus on quality of life, in addition to hard medical facts like survival rate, is new. What we develop here can help make medical trade-offs at the individual level. By optimizing treatment, care can be tailored and therefore become more cost-effective.”
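To make that trade-off concrete, here is a minimal sketch of how survival and quality of life might be weighed against each other for an individual patient. The survival probabilities reuse the toy values from the sketch above, and the quality-of-life scores and preference weights are hypothetical placeholders, not values from the consortium’s models.

```python
# Illustrative weighing of survival probability against quality of life for an
# individual patient. All numbers are hypothetical placeholders: the survival
# probabilities reuse the toy values computed in the earlier sketch, and the
# quality-of-life scores stand in for patient-specific preferences.

SCENARIOS = {
    "surgery":          {"p_survival": 0.898, "quality_of_life": 0.70},
    "watchful waiting": {"p_survival": 0.862, "quality_of_life": 0.95},
}

def expected_outcome(p_survival: float, quality_of_life: float, weight: float) -> float:
    """Weighted combination of the two criteria.
    weight = 1.0 means only survival matters; weight = 0.0 means only quality of life."""
    return weight * p_survival + (1 - weight) * quality_of_life

# Two hypothetical patients who weigh survival differently.
for weight in (0.9, 0.5):
    best = max(SCENARIOS, key=lambda name: expected_outcome(weight=weight, **SCENARIOS[name]))
    print(f"survival weight {weight}: preferred option = {best}")
```

In this toy example, a strongly survival-oriented weighting favours surgery, while a patient who puts more weight on quality of life ends up with watchful waiting: exactly the kind of individual trade-off the quote describes.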
Exploring boundaries
“The medical-ethical and legal implications from the perspective of users, patients and doctors are important. For example, an oncologist might ask about the impact of a particular treatment and can then use the model’s output when counselling a patient. This information can be used to better assess whether surgery is necessary or whether the risk of metastasis is so minimal that it does not justify the reduced quality of life. Importantly, we do not advise but offer support with well-founded information, and that information is traceable in a way that makes legal sense. In that sense, this technology can be deployed in an authorized way.”
“I don’t see legal frameworks as a barrier to AI.”
Doctor or AI?
“We believe that the user of such a model – the doctor – should always be accountable for the decision. We need to consider the consequences of ‘handing over’ part of a medical decision to a system. The system must be given a ‘seal of approval’ so that it can be trusted, and at the same time the user needs to know how to handle it properly. We want to translate that combination into AI. This is not always obvious, because with AI you work with data that has unknown properties, and you can extract outcomes that you didn’t already know beforehand. That changes the question of when quality control has been satisfied. What does accountability mean here, from a medical-ethical but also a legal point of view, given that AI is different from, say, an MRI scanner, of which we do demand exactly that? Enforcement, setting standards and legal review are therefore of great importance. And even broader: how much freedom should the industry have to decide this for itself?”
Photo credits: NWO
White box AI and transplants
“With transplants, international legislation plays a big role, and I can imagine that decision support is desirable here too. A lot of data and knowledge will already be available. I am curious about the precise demand for decision support in transplantation, and how we could meet it by bundling all that knowledge into a system that satisfies all European requirements.”
“I expect that our models can soon be translated to other domains where decisions can also literally make a difference!”
Rollout to other domains
“We are now in the phase of testing and authorization of our models. In time, we would also like to look at other medical domains, such as decision support in ICU care or in Alzheimer’s disease, to support the trade-offs that need to be made there. Decision support outside healthcare is also interesting, such as ‘grid control’ in the energy sector, for example!”
Future
“My dream is to use AI to help people make decisions that ensure an optimal quality of life for them. That means being able to provide the information that is relevant for the patient to decide. That could be a social aspect, from still being able to ride a bike to other individual circumstances. It is not enough to say that the chance of metastasis to the lymph nodes is 7%, because what does that mean to a patient? At the same time, I offer a warning: we should not want AI to make decisions about people on its own. We should also always ensure transparency and point out the system’s shortcomings to the patient. We must be very careful with the ethics involved, and I think it is right that there are strong legal safeguards in place. Then it really can contribute to a better life! Perhaps a bit atypical for an AI researcher… I see legal frameworks and collaborating with experts from other disciplines as an opportunity for AI. That actually ensures reliability and safety, just as in aviation.”
“I think it is important that humans remain in control. AI should support and not dominate or force.”
Photo credits: NWO
Keynote lecture, 25 September, Annual meeting
“With my lecture, “Developing and Implementing AI-Based Decision Support Systems in Clinical Practice: Opportunities, Regulatory Challenges, and Legal Considerations”, I hope to provide inspiration and insights. I am also looking forward to listening to others and am curious about Eurotransplant’s specific domain. What issues are there where AI could have a decision-support role?”
Don’t miss it!