Uncertainty Quantification for Deep Learning Segmentation Models of Anatomical Networks


The use of machine learning systems in practical applications calls for a detailed understanding of what a model does not know. Today's deep learning algorithms can learn complex mappings between high-dimensional input data, such as images, and problem-specific outputs. However, these mappings are often black boxes, and the predictions they produce tend to appear highly certain. In practice, this can have far-reaching consequences when predictions are trusted blindly, even though the model may in fact have been very unsure about them.

Models applied in medical domains are particularly sensitive, since the diagnosis and further treatment of a patient may be derived from their outputs. Presenting only the most likely hypothesis without reliable uncertainty estimates can create a false sense of safety and lead to serious mistakes in subsequent steps of medical treatment. Additional information about a prediction's uncertainty, on the other hand, offers the chance to resolve arising issues with expert knowledge.

For image segmentation tasks there is usually more than one plausible solution to a given problem. An example is tissue that is clearly visible on a CT scan but cannot be unambiguously identified as malignant or benign; experts may disagree on this, as well as on the boundaries of the tissue. A model applied to this problem should be capable of learning and presenting the disagreement between expert opinions, for example by learning a multimodal distribution over segmentations, as sketched below. It is therefore of great interest to combine the predictive strength of state-of-the-art segmentation models with reliable uncertainty estimates.
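As an illustration of what such a multimodal output could look like, the following toy NumPy sketch samples a one-dimensional latent code from a two-component mixture and decodes it into a binary segmentation; the two latent modes stand in for two disagreeing annotation styles. The toy image, the decode function and all parameter values are made up purely for illustration and do not correspond to any specific published architecture.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "image": a bright disk whose boundary is ambiguous.
    yy, xx = np.mgrid[0:64, 0:64]
    image = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 12.0 ** 2))

    def decode(image, z):
        # Toy decoder: the latent code z shifts the segmentation threshold,
        # so different z values yield tighter or looser tissue boundaries.
        return image > (0.5 + z)

    # Two latent modes, e.g. one group of experts segmenting conservatively
    # and one segmenting generously.
    latent_means, latent_std = np.array([-0.15, 0.15]), 0.02

    samples = []
    for _ in range(8):
        mode = rng.integers(len(latent_means))        # pick an annotation "style"
        z = rng.normal(latent_means[mode], latent_std)
        samples.append(decode(image, z))              # one plausible segmentation

    areas = [int(s.sum()) for s in samples]
    print(sorted(areas))  # areas cluster into two groups: the learned output distribution is multimodal

The point of the sketch is only that a single input image maps to a distribution of segmentations whose samples fall into distinct clusters, mimicking expert disagreement.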

In this context, it is common to distinguish model uncertainty (epistemic uncertainty, derived from the Greek επιστημη (episteme), meaning knowledge), which can be explained away with enough data, from the noise inherent in the observed data (aleatoric uncertainty, derived from the Latin alea, meaning a die). This characterization of uncertainty into two types is built into Bayesian machine learning approaches, which have been transferred to deep learning models in computer vision in recent years. While the distinction between aleatoric and epistemic uncertainty in vision models provides a well-founded framework, it can still be unclear what exactly is captured by which type of uncertainty for a given task and model.
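To make the two notions concrete, a commonly used entropy-based decomposition writes the total predictive uncertainty H[E_θ p(y|x,θ)] as the sum of an aleatoric part, the expected entropy E_θ H[p(y|x,θ)], and an epistemic part, the mutual information I(y; θ | x) between the prediction and the model parameters. The NumPy sketch below is a minimal illustration of this decomposition, assuming a binary segmentation setting and a stack of Monte Carlo foreground-probability maps (e.g. from MC dropout or an ensemble); the array shapes and function names are assumptions made for illustration, not part of the project's methods.

    import numpy as np

    def binary_entropy(p, eps=1e-8):
        # Entropy (in nats) of a Bernoulli distribution with parameter p.
        p = np.clip(p, eps, 1 - eps)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))

    def decompose_uncertainty(mc_probs):
        # mc_probs: array of shape (T, H, W) with T Monte Carlo probability maps.
        # Returns per-pixel total, aleatoric and epistemic uncertainty maps,
        # using the standard decomposition: total = aleatoric + epistemic.
        mean_p = mc_probs.mean(axis=0)
        total = binary_entropy(mean_p)                      # entropy of the averaged prediction
        aleatoric = binary_entropy(mc_probs).mean(axis=0)   # expected entropy over samples
        epistemic = total - aleatoric                       # mutual information I(y; θ | x)
        return total, aleatoric, epistemic

    # Example: 20 stochastic forward passes over a 64x64 image.
    rng = np.random.default_rng(0)
    mc_probs = rng.uniform(0.3, 0.7, size=(20, 64, 64))
    total, aleatoric, epistemic = decompose_uncertainty(mc_probs)
    print(total.mean(), aleatoric.mean(), epistemic.mean())

In this decomposition the epistemic term shrinks as the Monte Carlo samples agree with each other, matching the intuition that model uncertainty can be explained away with enough data, while per-pixel noise remains in the aleatoric term.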

In recent work, different architectures that capture uncertainty in (medical) image segmentation have been proposed. This project aims to validate the state-of-the-art approaches for uncertainty quantification by comparing and evaluating their behaviour in different contexts, shedding light on their properties, and alleviating shortcomings by proposing novel model architectures. In particular, we want to propose methods that are capable of modelling an even more fine-grained decomposition of the associated uncertainties in medical image analysis.

PhD project

By: Kilian Zepf

Section: Visual Computing, DTU Compute

Principal supervisor: Aasa Feragen

Co-supervisor: Jes Frellsen

Project title: Uncertainty Quantification for Deep Learning Segmentation Models of Anatomical Networks

Term: 01/05/2021 → 30/04/2024

Contacts

Aasa Feragen
Professor
DTU Compute
+45 26 22 04 98

Jes Frellsen
Associate Professor
DTU Compute
+45 45 25 39 23

Kilian Maurus Zepf
PhD student
DTU Compute