We are moving towards a society where an increasing number of decisions are made by artificial intelligence – from hiring new staff to autonomous vehicles and clinical decisions in operating theatres.
With support from the Independent Research Fund Denmark (DFF), Associate Professor Jes Frellsen and Professor Søren Hauberg from DTU Compute aim, over the next five years, to make AI more transparent and accountable.
“A major problem with current AI models is that they are far too overconfident. Essentially, something is either completely right or completely wrong. AI does not reflect well on its uncertainty – why it is uncertain, or how close it is to complete failure or complete success – which can be problematic in critical applications,” explains Principal Investigator Jes Frellsen.
“If, on the other hand, a model can say: ‘I believe this, but I am not entirely certain’, the user can act more appropriately. It is about transparency and responsible use of AI. This is a fundamental property we need to master in a society where AI-based solutions are accelerating rapidly.”
The researchers and their team will develop methods that can quantify and explain uncertainty in AI models. The Independent Research Fund Denmark has awarded approximately seven million Danish kroner to the methodological development project.
Jes Frellsen is also co-leader of another five-year AI project funded by the same foundation and led by the IT University of Copenhagen (ITU), which focuses on the use of language models in healthcare. Here, the challenge is how models can communicate uncertainty in a way that both patients and healthcare professionals understand.
While the projects share a common theme, they approach the problem from very different angles.
New mathematics for complex models
In the methodological project, the researchers aim to make AI more reliable by combining deep neural networks – models that identify patterns in complex data – with a century-old mathematical approach that enables the calculation of uncertainty in predictions.
However, neural networks are so complex and full of symmetries that this classical approach cannot be applied as it stands.
Jes Frellsen and Søren Hauberg therefore intend to develop new mathematical and algorithmic tools that take the networks’ structure and symmetries into account, thereby achieving more robust uncertainty estimates.
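The article does not name the classical approach, but the general recipe can be sketched: keep the trained network's features fixed and place a Gaussian posterior over the final linear layer, so that every prediction carries an input-dependent error bar. The toy below is a minimal, self-contained NumPy illustration of that idea; the random feature map merely stands in for a real network, and nothing here represents the project's actual, yet-to-be-developed methods.

```python
# Minimal sketch: attach classical Gaussian uncertainty mathematics
# to a "network" by treating only the output layer probabilistically.
# Hypothetical toy setup; the feature map stands in for a trained
# network's penultimate activations.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression data with a deliberate gap around x = 0.
x_train = np.concatenate([rng.uniform(-3, -1, 30), rng.uniform(1, 3, 30)])
y_train = np.sin(x_train) + 0.1 * rng.normal(size=x_train.shape)

def features(x, n_feat=50):
    """Stand-in for a trained network's learned feature map."""
    centers = np.linspace(-4, 4, n_feat)
    return np.exp(-0.5 * (x[:, None] - centers[None, :]) ** 2)

Phi = features(x_train)
noise_var, prior_var = 0.01, 1.0

# Gaussian posterior over the output weights (a Bayesian linear layer):
# posterior precision = prior precision + Phi^T Phi / noise variance.
A = Phi.T @ Phi / noise_var + np.eye(Phi.shape[1]) / prior_var
A_inv = np.linalg.inv(A)
w_mean = A_inv @ Phi.T @ y_train / noise_var

# Predictions now come with input-dependent uncertainty.
x_test = np.linspace(-4, 4, 9)
Phi_t = features(x_test)
mean = Phi_t @ w_mean
std = np.sqrt(np.sum(Phi_t @ A_inv * Phi_t, axis=1) + noise_var)

for x, m, s in zip(x_test, mean, std):
    print(f"x={x:+.1f}  prediction={m:+.2f}  ± {2 * s:.2f}")
# Inside the data regions the ± band is narrow; in the gap around
# x = 0 it widens, so the model signals where it is guessing.
```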
Failure to express uncertainty hinders innovation
An AI model's failure to express uncertainty is not only a technical issue; it can have drastic consequences when decisions are based on incorrect conclusions. It also hampers innovation.
Jes Frellsen and Søren Hauberg apply deep learning to protein sequences, an approach that is widely used in industry, including the pharmaceutical sector.
“In the pharmaceutical industry, AI can predict aspects of how a protein-based drug might behave in the human body. But if the model only answers ‘success’ or ‘failure’, it is of little practical value in the drug development process. These are concrete challenges that our partners want us to help solve so that they can develop better drugs,” says Jes Frellsen.
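To make the point concrete, here is a deliberately simplified sketch contrasting a hard “success/failure” label with a probability and a spread. The numbers are invented placeholders, and the small ensemble is only a generic stand-in for uncertainty quantification, not the pipeline used by the project's industry partners.

```python
# Hedged illustration of the quote's point: a hard label versus a
# probability with a spread. All numbers are hypothetical.
import numpy as np

# Pretend these rows are four models trained on resampled protein
# data (a simple ensemble stand-in), each giving a success
# probability for three candidate drug variants.
ensemble_probs = np.array([
    [0.92, 0.55, 0.12],
    [0.88, 0.35, 0.08],
    [0.95, 0.65, 0.15],
    [0.90, 0.45, 0.10],
])

mean_p = ensemble_probs.mean(axis=0)
spread = ensemble_probs.std(axis=0)

for i, (p, s) in enumerate(zip(mean_p, spread)):
    hard = "success" if p >= 0.5 else "failure"
    print(f"variant {i}: hard label={hard:7s}  P(success)={p:.2f} ± {s:.2f}")
# Variants 0 and 2 get confident answers; variant 1's hard label
# hides that the models genuinely disagree, which is exactly the
# information a drug developer needs to prioritize experiments.
```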
Uncertainty about meaning and phrasing in language models
The language-model project, led by Christian Hardmeier at ITU, focuses on uncertainty in large language models in healthcare applications, and in particular on separating uncertainty about a statement's meaning from uncertainty about its phrasing.
Through experiments, the researchers will also examine how people interpret uncertainty and how it should best be communicated.
“Current AI chatbots can quickly answer almost any question, including medical ones, and the phrasing usually sounds very convincing. But how can the user know when the answer is correct?” says Jes Frellsen.
“It is fundamentally about transparency. We know that AI models make mistakes, and they must be able to calculate and clearly express that uncertainty so users can understand it.”
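One published technique for separating the two kinds of uncertainty is to sample the model's answer several times and cluster the samples by meaning before measuring disagreement (the idea behind so-called semantic entropy). In the sketch below, the sampled answers are hard-coded and the meaning check is a toy string rule standing in for a real entailment model; the ITU project's own methods are not described in the article.

```python
# Minimal semantic-entropy-style sketch: measure disagreement over
# meanings rather than over exact phrasings. Hypothetical samples
# stand in for repeated generations from a language model.
from collections import Counter
import math

# Hypothetical: five sampled answers to the same medical question.
samples = [
    "Take the tablet with food.",
    "The tablet should be taken together with a meal.",
    "Take it with food.",
    "Take the tablet on an empty stomach.",
    "It must be taken without food.",
]

def meaning(answer: str) -> str:
    """Toy equivalence check; in practice an entailment model or
    embedding similarity would cluster paraphrases."""
    if "with" in answer and "without" not in answer:
        return "with food"
    return "without food"

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts)

surface_H = entropy(Counter(samples).values())
semantic_H = entropy(Counter(meaning(s) for s in samples).values())

print(f"entropy over exact phrasings: {surface_H:.2f} bits")
print(f"entropy over meanings:        {semantic_H:.2f} bits")
# High surface entropy with low semantic entropy means the model is
# only unsure how to phrase the answer; high semantic entropy means
# it is unsure what the answer is, the clinically dangerous case.
```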
Important for Denmark to take the lead
Globally, only a few researchers work on uncertainty quantification in AI. At the European level, Danish, British, German, and Finnish researchers are among the leaders.
“You can hardly avoid AI models today. The trend is towards automating more and more processes, so it is crucial that we, as a society, take this issue seriously,” says Søren Hauberg, co-leader of the DTU Compute project.
“This underlines how important it is for Danish researchers to lead the way in making AI more responsible.”
Both of the DFF-funded projects begin in March 2026.