Uncertainty estimation for machine learning system self-assessment on medical conversations

Jakob Drachmann Havtorn: Uncertainty-aware models and algorithms can accelerate widespread adoption of AI

“Doubt is an uncomfortable position but certainty is an absurd one” - Voltaire, 1767

Artificial Intelligence (AI) systems provide unprecedented abilities, but a major drawback is their inability to act reliably outside of scenarios highly similar to those they learned from. Modern machine learning models require massive amounts of human-labelled data and suffer from a tendency to overfit to their training data. Ultimately, they offer little to no performance guarantees when deployed on data not generated by the same process as the training distribution. On such out-of-distribution (OOD) inputs, a model may not only be wrong, but confidently so, greatly limiting the areas in which deployment of such models can be considered safe.

Therefore, we believe that enabling machine learning systems to reliably assess their own certainty will be a necessity for widespread adoption of machine learning in critical applications. This project addresses equipping modern machine learning systems with a notion of uncertainty through research in generative modelling, information theory and representation learning, as well as unsupervised and semi-supervised learning.

Among others, the project will deal with the following research questions:

  • To what extent are current generative modelling approaches capable of out-of-distribution detection, when are such estimates unreliable, and why?
  • Which links exist between issues of out-of-distribution detection and the learned feature hierarchies and data representations?
  • How can issues with out-of-distribution detection be alleviated?
  • How can generative models with robust out-of-distribution detection abilities be applied to sequence data such as audio and text?
  • How can we leverage the advantages of semi-supervised learning in such models to simultaneously improve data efficiency and robustness to out-of-distribution inputs?
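To make the first question concrete, the following is a minimal, hypothetical sketch of likelihood-based OOD detection: fit a density model on in-distribution data and flag inputs whose log-likelihood falls below a threshold derived from the training set. A one-dimensional Gaussian stands in here for the deep generative models the project actually studies; all names and the 1st-percentile threshold are illustrative choices, not part of the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# "In-distribution" training data: samples from a standard normal.
train = rng.normal(loc=0.0, scale=1.0, size=5000)

# Fit a simple Gaussian density model (a stand-in for a deep generative model).
mu, sigma = train.mean(), train.std()

def log_likelihood(x, mu=mu, sigma=sigma):
    """Log-density of x under the fitted Gaussian."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# OOD threshold: e.g. the 1st percentile of training log-likelihoods.
threshold = np.percentile(log_likelihood(train), 1)

def is_ood(x):
    """Flag x as out-of-distribution if its log-likelihood is below threshold."""
    return bool(log_likelihood(x) < threshold)

print(is_ood(0.1))  # a typical in-distribution point
print(is_ood(8.0))  # far outside the training distribution
```

One of the project's starting points is precisely that this naive recipe can fail for deep generative models, which sometimes assign higher likelihood to OOD data than to their own training data.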

In the real world, well-calibrated uncertainty estimates yield improved robustness and will allow machine learning systems to enter into reliable service in a wide range of critical applications, from medical triaging and radiology, through self-driving vehicles, to financial services and transaction validation.

Uncertainty awareness will help make more efficient use of resources in decision-making and make it clear when human intervention is required.
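The decision rule implied here can be sketched as a simple selective-prediction policy (the function name and the 0.9 threshold are illustrative assumptions, not part of the project): act on the model's prediction only when its confidence clears a threshold, and otherwise defer the case to a human.

```python
import numpy as np

def predict_or_defer(probs, threshold=0.9):
    """Return the predicted class index if the model is confident enough,
    otherwise defer the decision to a human."""
    probs = np.asarray(probs)
    if probs.max() >= threshold:
        return int(probs.argmax())
    return "defer-to-human"

print(predict_or_defer([0.02, 0.97, 0.01]))  # confident: acts on class 1
print(predict_or_defer([0.40, 0.35, 0.25]))  # uncertain: deferred
```

In practice the confidence score would come from calibrated model probabilities or an explicit uncertainty estimate, and the threshold would be tuned to the cost of errors versus the cost of human review.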

PhD project

By: Jakob Drachmann Havtorn

Section: Cognitive Systems

Principal supervisor: Jes Frellsen

Co-supervisors: Søren Hauberg, Ole Winther, Lars Maaløe 

Project title: Uncertainty estimation for machine learning system self-assessment on medical conversations

Term: 01/09/2020 → 02/01/2024

Contact

Jakob Drachmann Havtorn
Industrial PhD
DTU Compute

Jes Frellsen
Associate Professor
DTU Compute
+45 45 25 39 23

Søren Hauberg
Professor
DTU Compute
+45 45 25 38 99

Ole Winther
Professor
DTU Compute
+45 45 25 38 95

Lars Maaløe
Honorary Associate Professor
DTU Compute