Image credit: Burgos-Artizzu, X.P., Coronado-Gutiérrez, D., Valenzuela-Alcaraz, B., et al. Evaluation of deep convolutional neural networks for automatic classification of common maternal fetal ultrasound planes. Sci Rep 10, 10200 (2020).

Learning to Collaborate via Explainable AI in Medical Education


Every minute, over 150 newborns bring hope for the future to their families. Their health is a vital concern for parents and doctors alike. Evidence shows that abnormal fetal growth is one of the leading causes of perinatal morbidity and mortality in both industrialized and developing countries.

Ultrasound scanning is widely employed in prenatal examination. Thanks to its low cost, real-time capability, and absence of harmful radiation, it provides a safe and convenient way to detect fetal abnormalities early. During the examination, clinicians manipulate the ultrasound probe to obtain certain standardized scan planes. However, obtaining high-quality ultrasound images is very challenging and depends on operator experience; the risk of diagnostic failure increases when examinations are of insufficient quality. This is compounded by the high variance across operators, devices, and acoustic windows. As a result, even experienced experts take a long time to find and assess standard scan planes, and for inexperienced clinicians, working with minimal expert supervision and support increases the risk of medical errors. In the Western world, medical errors are exceeded only by cancer and heart disease in the number of fatalities they cause: about one in ten diagnoses is estimated to be wrong, resulting in inadequate or even harmful care.

Recently, many works have explored the possibility of processing ultrasound data with deep neural networks (DNNs). Trained DNNs are faster and far more reproducible than humans, offering the potential for standardized quality of care and more efficient use of clinicians' time. In these existing works, however, DNNs are rarely designed as collaborators for healthcare professionals, but rather as mechanical substitutes for part of the diagnostic workflow: researchers develop models only to beat the state of the art on narrow performance metrics. As a result, clinicians do not always perceive these solutions as helpful for their clinical tasks, since they solve only part of the problem sufficiently well and run the risk of overfitting. To address these problems, we need not only to provide interpretability in the form of explainable models, but also models whose explanations are easy to understand and use within the clinician's workflow. Put simply, we need to provide good explanations.
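To ground what "explanation" means here, the sketch below shows one of the simplest forms: a gradient-based saliency map that highlights which pixels drive a classifier's prediction (Simonyan et al., 2014). This is only an illustrative baseline, not the method the project proposes; the function name and the assumption of a differentiable image classifier are ours.

```python
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Return an (H, W) map of |d score / d pixel| for the predicted class.

    Illustrative baseline only: `model` is assumed to be any differentiable
    image classifier taking a (C, H, W) tensor.
    """
    model.eval()  # eval mode; gradients still flow through the network
    x = image.unsqueeze(0).requires_grad_(True)  # (1, C, H, W), track grads
    scores = model(x)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()              # gradient of top score w.r.t. pixels
    return x.grad.abs().squeeze(0).max(dim=0).values  # collapse channels to (H, W)
```

Bright regions in the resulting map indicate pixels whose perturbation most changes the predicted score, which a clinician can compare against the anatomy they expect to define the plane.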

This project aims to address these problems:

- We will develop explainable deep learning models that recognize standard scan planes (see the classifier sketch after this list).

- We will develop a system that provides optimized, continual feedback on how to improve image quality. This system will also be trained to optimize the quality of its explanations.

- We will provide an additional feedback signal to the clinician that detects whether the image is out of distribution, or whether demographics put the patient at risk of suboptimal algorithmic performance (also illustrated in the sketch below).
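To make the first and third aims concrete, here is a minimal PyTorch sketch of a standard-plane classifier whose softmax confidence doubles as a crude out-of-distribution flag (Hendrycks & Gimpel, 2017). The backbone choice, class count, threshold, and all names are hypothetical stand-ins, not the project's actual models.

```python
import torch
import torch.nn as nn
import torchvision.models as models

NUM_PLANES = 6  # hypothetical number of standard plane classes

class PlaneClassifier(nn.Module):
    """Toy standard-plane classifier for grayscale ultrasound frames."""

    def __init__(self, num_classes: int = NUM_PLANES):
        super().__init__()
        # ResNet-18 backbone, randomly initialized to keep the sketch
        # self-contained; re-headed for 1-channel input and our classes.
        self.backbone = models.resnet18(weights=None)
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)

@torch.no_grad()
def classify_with_ood_flag(model: nn.Module, image: torch.Tensor,
                           threshold: float = 0.5):
    """Return (predicted plane, confidence, is_ood).

    A low maximum softmax probability serves as a crude
    out-of-distribution flag; the threshold is hypothetical.
    """
    model.eval()
    probs = torch.softmax(model(image.unsqueeze(0)), dim=1).squeeze(0)
    conf, plane = probs.max(dim=0)
    return plane.item(), conf.item(), conf.item() < threshold

# Usage on a dummy 224x224 grayscale frame:
model = PlaneClassifier()
plane, conf, is_ood = classify_with_ood_flag(model, torch.randn(1, 224, 224))
print(f"plane={plane}, confidence={conf:.2f}, out-of-distribution={is_ood}")
```

In practice, softmax confidence alone is a weak out-of-distribution detector, which is precisely why the project treats this feedback signal as a research question rather than a solved component.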

PhD project

By: Manxi Lin

Section: Visual Computing

Principal supervisor: Aasa Feragen

Co-supervisors: Anders Nymark Christensen, Martin Grønnebæk Tolsgaard

Project title: Learning to Collaborate via Explainable AI in Medical Education

Term: 01/11/2021 → 31/10/2024

Contacts

Manxi Lin
PhD student
DTU Compute
+45 52 64 41 28

Aasa Feragen-Hauberg
Professor
DTU Compute
+45 26 22 04 98

Anders Nymark Christensen
Associate Professor
DTU Compute
+45 20 88 57 62

Martin Grønnebæk Tolsgaard
Researcher
DTU Compute