Active Learning for 3D Medical Image Segmentation

Sophia Bardenfleth: We are getting older – so how do we make sure that our bones can keep up? Finding ways to do fast and robust 3D image segmentation can help doctors treat osteoporosis patients.

172,000 people in Denmark are diagnosed with osteoporosis, and 2 to 3 times as many could be living with the condition without knowing it (Sundhedsstyrelsen, 2019). It is often not detected until the person suffers a fracture caused by a minor fall. This happens more frequently in the older population, since bone density decreases with age, making the bones more brittle. In some cases, the bone density becomes so low that we classify it as osteoporosis. Detecting the condition early might prevent fractures and hospitalizations.

The current way of detecting osteoporosis involves analyzing the microstructure of the bones from a CT scan, which is a 3D image produced by combining multiple X-ray images. Once these scans have been taken, the bone matter needs to be separated – or segmented – from the rest of the scan in order to perform the microstructure analysis. We have methods for automating this segmentation process, but there is a problem in the field of medical image analysis: the methods need a lot of training data, which is simply too expensive to obtain!

To segment 3D images automatically, we train a model – typically a deep neural network – to segment new images. The model first needs to see many training examples so that it can learn to do the segmentation correctly. To produce this training data, doctors or other health professionals often have to spend a lot of time annotating data, meaning they manually segment the region of interest in many CT scans. This manual segmentation is time-consuming, and some segmentations may not give the model much new information.

A newer approach, called active learning (AL), is therefore to have the medical professionals annotate only the data that will improve the model's performance. This is typically done by searching for the images whose labels the model is most uncertain about. How best to measure this uncertainty is a topic of current research, and some methods also include measures of how similar a new image is to already labelled images, to make sure the model is trained on a diverse set of images.
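To give a flavour of how such a selection step can work, here is a minimal sketch of one common acquisition strategy: ranking unlabelled images by the mean voxel-wise entropy of the model's predicted class probabilities and sending the most uncertain ones to the annotators. This is a generic illustration, not the specific method used in the project, and the function names and the toy two-class (bone/background) data are hypothetical.

```python
import numpy as np

def entropy_uncertainty(probs):
    """Score one image by the mean voxel-wise entropy of its
    predicted class probabilities.
    probs: array of shape (num_voxels, num_classes), softmax outputs."""
    eps = 1e-12  # avoid log(0)
    voxel_entropy = -np.sum(probs * np.log(probs + eps), axis=-1)
    return voxel_entropy.mean()

def select_for_annotation(unlabelled_probs, budget):
    """Rank unlabelled images by uncertainty and return the indices
    of the `budget` most uncertain ones."""
    scores = np.array([entropy_uncertainty(p) for p in unlabelled_probs])
    return np.argsort(scores)[::-1][:budget]

# Toy example: three "images" of 100 voxels, two classes (bone / background).
rng = np.random.default_rng(0)
confident = np.tile([0.99, 0.01], (100, 1))   # model is nearly certain
uncertain = np.tile([0.50, 0.50], (100, 1))   # model is maximally unsure
mixed = rng.dirichlet([1.0, 1.0], size=100)   # somewhere in between

picked = select_for_annotation([confident, uncertain, mixed], budget=1)
# The maximally uncertain image (index 1) is selected for annotation.
```

A diversity-aware variant would additionally penalize candidates that are very similar to already labelled images, for example by combining the entropy score with a distance in some feature space.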

During my PhD I will investigate different active learning methods to improve state-of-the-art approaches to 3D medical image segmentation.

PhD project

By: Sophia Bardenfleth

Section: Visual Computing

Principal supervisor: Anders Bjorholm Dahl

Co-supervisors: Vedrana Andersen Dahl, Chiara Villa (KU)

Project title: 3D Image Segmentation

Term: 01/08/2021 → 29/10/2024
