Uncertainty and robustness in deep learning

Anshuk Uppal: Reliable artificial intelligence achievable through Bayesian Deep Learning

Automated systems powered by machine learning algorithms have become increasingly pervasive. Such systems learn patterns found in the real world and make decisions based on these learnt patterns. Machine learning researchers have been probing the foundations of these algorithms to make them robust to the noise present in the real world, and thereby more human-like.

Do you wish to own a self-driving car in the future? Are you unsure because you don’t trust the car to make decisions on the road?

Self-driving cars are an impressive feat of technology. They predict the position of the car relative to the road, traffic and road signs, and make driving decisions, all in real time. These predictions and decisions need to be scrupulous to avoid accidents and fatalities. Judging the separation between two lanes, for example, is trivial for us but can be very difficult for such a driving system.

Deep neural networks lie at the heart of making predictions from videos and images in real time. A self-driving car has a myriad of cameras pointed in different directions and at different angles to capture rich information about road conditions at every instant. This information is combined in several ways to “automate” driving. Neural networks excel at many similar tasks, such as recognising speech and translating languages, but these excellent benchmark results hide an unsatisfactory reality.

These systems are often unsure about their decisions. Recent research has established the harm they can cause when applied in critical domains. For instance, what if a self-driving car cannot distinguish the demarcation between a road and a sidewalk? What if it starts driving on the sidewalk because of an inaccurate prediction? Even though these systems report state-of-the-art accuracies, they are trained in controlled environments with limited data. Hence, these systems need to know what they don’t know!

So should one abstain from adopting such automation completely? No, because there’s a way to solve this problem – by making these algorithms more like ourselves!

Human beings usually resolve mundane dilemmas through thoughtful scrutiny: we contemplate the consequences of our actions and what someone else would have done in a similar situation. Asking an autonomous system about its failure modes is much like questioning a person about their doubts. We pose many distinct but related questions to map out its boundaries and failure cases. In the case of the self-driving car, showing the system multiple images of sidewalks and roads, from different highways, regions and countries, new and old, reveals its uncertainty. In mathematical terms, this constructs a probability distribution. And using probability distributions, we can build an automated system that discloses its secrets.
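The idea of turning many answers into a probability distribution can be sketched in a few lines of Python. This is a toy illustration with made-up numbers, not the project's actual method: we pretend a perception model has been queried on a thousand variations of a scene and reports a "sidewalk" probability each time; the spread of those answers is a simple measure of the system's uncertainty.

```python
import random
import statistics

random.seed(0)

# Hypothetical toy data: the "sidewalk" probability a perception model
# reports for the same kind of scene, queried under many variations
# (different regions, lighting conditions, road ages). Each query is
# one sample from the model's predictive distribution.
sidewalk_probs = [
    min(1.0, max(0.0, random.gauss(0.7, 0.15))) for _ in range(1000)
]

# The collection of answers forms an empirical probability distribution;
# its spread quantifies how unsure the system is about "sidewalk".
mean = statistics.fmean(sidewalk_probs)
spread = statistics.stdev(sidewalk_probs)
print(f"mean belief: {mean:.2f}, spread: {spread:.2f}")
```

A tight spread would mean the system gives roughly the same answer however the scene varies; a wide spread is exactly the kind of uncertainty we want the system to disclose.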

By applying basic rules of probability, such as Bayes’ theorem, we can tell when the model of the real world inside an autonomous system may fail. Even better, we can make such systems report their confidence in each prediction and in the decisions that follow. The goal is to design probabilistic systems that harness randomness to generalise beyond memorised patterns.
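A minimal worked example shows Bayes’ theorem in this role. The numbers below are assumptions chosen purely for illustration: a prior for how often sidewalk actually appears in view, and how often a hypothetical detector fires on sidewalk versus plain road. Bayes’ theorem then converts a raw detection into a calibrated belief.

```python
# Toy numbers (assumptions for illustration, not from the project):
p_sidewalk = 0.10                # prior: how often the scene is sidewalk
p_detect_given_sidewalk = 0.90   # detector fires on real sidewalk
p_detect_given_road = 0.20       # detector also fires on plain road (noise)

# Bayes' theorem: P(sidewalk | detection)
#   = P(detection | sidewalk) * P(sidewalk) / P(detection)
evidence = (p_detect_given_sidewalk * p_sidewalk
            + p_detect_given_road * (1 - p_sidewalk))
posterior = p_detect_given_sidewalk * p_sidewalk / evidence

print(f"P(sidewalk | detection) = {posterior:.3f}")  # ≈ 0.333
```

Despite the detector being right 90% of the time on real sidewalk, the posterior belief is only about one in three, because sidewalk is rare and the detector also fires on road. This is the kind of honest, quantified confidence that a probabilistic system can report.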

Combining Bayes’ theorem with neural networks is a new frontier in machine learning research. These algorithms can give rise to reliable systems that request human intervention when they encounter something unusual. Such human-in-the-loop systems could be very effective at diagnosing diseases and prescribing medication, prompting expert opinion whenever unsure, using the expert’s time efficiently and helping to fill the talent gap.
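The human-in-the-loop idea can be sketched as a simple triage rule. The function and threshold below are hypothetical, not part of the project: a prediction is acted on automatically only when the system’s self-reported confidence clears a bar; otherwise the case is routed to a human expert.

```python
def route_prediction(confidence: float, threshold: float = 0.95) -> str:
    """Hypothetical triage rule for a human-in-the-loop system:
    act autonomously only when the model's reported confidence
    clears the threshold; otherwise defer to a human expert."""
    return "automatic" if confidence >= threshold else "ask_human"

# Usage: three predictions with their self-reported confidences.
for confidence in (0.99, 0.80, 0.50):
    print(confidence, route_prediction(confidence))
```

The value of a Bayesian system is precisely that the confidence fed into such a rule is meaningful, so the rare, unusual cases are the ones that reach the expert.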

PhD project

By: Anshuk Uppal

Section: Cognitive Systems

Principal supervisor: Jes Frellsen

Co-supervisor: Wouter Boomsma, DIKU

Project title: Uncertainty and robustness in deep learning

Term: 15/08/2021 → 14/08/2024

Contact

Anshuk Uppal
PhD student
DTU Compute

Jes Frellsen
Associate Professor
DTU Compute
+45 45 25 39 23

Wouter Boomsma
Guest
DTU Compute