Cilie Werner Feldager Hansen: Learning to trust artificial intelligence
Imagine a patient comes into a doctor’s office. The patient says, “I am not feeling well. I have a headache, a cough, and I am exhausted, but I have no fever.”
In such a case, an artificial intelligence (AI) might assist the doctor in diagnosing
the problem, and its output would be “the patient has the flu”. The AI
acts as a black box and provides no explanation of its reasoning: there is no
foundation for an informed decision based on the prediction. The doctor might
disagree and therefore reject the diagnosis, making the AI superfluous.
Even to research scientists, AI is often a black box. Popular AI algorithms such as
neural networks are remarkably good at making predictions, which is the reason
for their success. As in the case with the doctor, the AI simply provides a
diagnosis but no explanation of how that prediction was reached. Worse, even
when such algorithms are wrong, they tend to be very certain that they
are correct, thereby vastly underestimating the uncertainty of the prediction.
This raises the question: how can we trust predictions made by an algorithm?
Unfortunately, most machine learning algorithms can neither provide explanations
nor be interpreted, and it is therefore hard to trust the model.
The aim of my project is to gain a deeper understanding of AI in a certain
type of model. Specifically, I will explore underlying structures that can be
constructed for any dataset. Such structures capture the essence of the data and
are often exploited in machine learning because they speed up computations. However,
this structure is unlike anything we are used to, and our usual intuition breaks down:
for example, the shortest distance between two points is no longer a
straight line. In this project, we will apply differential geometry with a Bayesian
approach to learn more about this structure and attempt to extract physical
traits from it.
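To see how a straight line can fail to be the shortest path, consider a toy sketch (entirely hypothetical, not the model used in this project): a made-up "decoder" maps a flat two-dimensional latent space onto a surface with a bump in the middle. Measuring latent curves by the length of their images on the surface, a detour around the bump is shorter than the straight latent line across it.

```python
import numpy as np

# Hypothetical decoder f: 2-D latent space -> 3-D data space.
# The Gaussian bump makes the embedded surface curved, so straight
# latent lines are no longer shortest paths under the induced metric.
def decode(z):
    x, y = z[..., 0], z[..., 1]
    height = 4.0 * np.exp(-(x**2 + y**2) / 0.2)
    return np.stack([x, y, height], axis=-1)

def curve_length(latent_points):
    """Length of a discretized latent curve, measured in data space."""
    decoded = decode(latent_points)
    return np.sum(np.linalg.norm(np.diff(decoded, axis=0), axis=1))

t = np.linspace(0.0, 1.0, 500)

# Straight latent line from (-1, 0) to (1, 0): it climbs over the bump.
straight = np.stack([2.0 * t - 1.0, np.zeros_like(t)], axis=-1)

# Semicircular detour around the bump, same endpoints.
theta = np.pi * (1.0 - t)
detour = np.stack([np.cos(theta), np.sin(theta)], axis=-1)

print(curve_length(straight))  # long: climbs up and down the bump
print(curve_length(detour))    # shorter: goes around on flat ground
```

The same idea, with the decoder replaced by a learned neural network, is what lets differential geometry assign meaningful distances to a latent space.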
The findings of this project could help us understand a technology that plays an
ever larger role in our everyday lives, from art and music, speech
recognition, web search, fraud detection, and efficient transportation to
human genome research, cancer diagnosis, and self-driving cars. Upon my
project’s completion, machine learning will be a little less of a black box, and
we will thereby have laid one brick on the long road to understanding AI.