Deep Learning

DTU researchers aim to make AI less overconfident

With two grants from the Independent Research Fund Denmark, Jes Frellsen and Søren Hauberg from DTU Compute hope to take an important step towards making artificial intelligence more transparent and responsible. Their goal is to develop methods that can incorporate and explain uncertainty in AI models – a capability that is essential in a society where AI-based solutions are advancing rapidly.
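To make the idea of uncertainty in AI models concrete, here is a minimal, illustrative sketch in Python. It is not the method developed in these projects; it shows one common approach, where a small ensemble of models is trained on resampled data and the disagreement between ensemble members is used as a rough measure of how uncertain a prediction is. All data and model choices in the snippet are toy assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data with a gap around x = 0, so a model
# should be more uncertain where it has seen no data.
x_train = np.concatenate([rng.uniform(-3, -1, 40), rng.uniform(1, 3, 40)])
y_train = np.sin(x_train) + 0.1 * rng.normal(size=x_train.shape)

def fit_member(x, y, degree=5):
    # One ensemble member: a polynomial fit to a bootstrap resample.
    idx = rng.integers(0, len(x), len(x))
    return np.polyfit(x[idx], y[idx], degree)

ensemble = [fit_member(x_train, y_train) for _ in range(20)]

x_test = np.linspace(-4, 4, 9)
preds = np.stack([np.polyval(c, x_test) for c in ensemble])

mean = preds.mean(axis=0)   # ensemble prediction
std = preds.std(axis=0)     # disagreement between members = uncertainty estimate

for xt, m, s in zip(x_test, mean, std):
    print(f"x = {xt:+.1f}  prediction = {m:+.2f}  uncertainty = {s:.2f}")

In the printout, the uncertainty is largest around x = 0 and beyond the range of the training data, where the toy ensemble has no evidence. Making this kind of signal reliable, and communicating it clearly, is the sort of capability the projects aim to bring to modern deep learning models.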

Associate Professor Jes Frellsen (left) and Professor Søren Hauberg, both from the Cognitive Systems section at DTU Compute.

Facts

Since 2018, the Independent Research Fund Denmark (DFF) has awarded grants under politically prioritised thematic calls, financed through annual political agreements on the allocation of the Danish Research Reserve.

The thematic instruments are open to applications from all scientific disciplines that can contribute relevant knowledge to the theme.

Thematic research – as politically prioritised – serves as a supplement to the fund’s free and independent research funding based on researchers’ own curiosity-driven ideas.

The two projects mentioned above are among eight projects funded by DFF:

  • Reclaiming Uncertainty: Robust Bayesian Deep Learning by Handling Overparameterisation
      • Principal Investigator: Jes Frellsen, DTU Compute
      • Co-leader: Søren Hauberg, DTU Compute
      • Granted amount: DKK 7,192,826
  • Conveying Caution & Confidence: Quantification and Communication of Uncertainty in Large Language Models
      • Principal Investigator: Christian Hardmeier, IT University of Copenhagen
      • Co-leader: Jes Frellsen, DTU Compute
      • Granted amount: DKK 7,183,820

Read more at DFF's homepage