Personalizing audiology by learning behavioral graphs based on user centred AI

Tiberiu-Ioan Szatmari: Contextual User Personalization – How to Use Deep Learning Without Giving Away Your Data

According to the World Health Organization (WHO), disabling hearing loss is a global phenomenon currently affecting over 5% of the world's population, or 466 million people, of whom 34 million are children. Recent estimates predict that over 900 million people, or one in ten, will have disabling hearing loss by 2050. This ailment impacts human life in several ways, from functional and social well-being to economic potential. Personalized healthcare is undergoing a digital revolution, shaping the future of hearing services and adapting to the upcoming needs of the population.

Hearing instruments are designed as autonomous devices that, depending on the environment, amplify sounds such as voices while attenuating noise based on fixed threshold values. Unfortunately, these threshold values are rarely personalized due to the lack of clinical resources in hearing healthcare, even though patients are known to differ by up to 15 dB in their ability to understand speech in noise. Internet-of-Things-connected hearing instruments represent a paradigm shift, as it becomes feasible to dynamically personalize settings in real-world listening scenarios using AI.
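To make the fixed-threshold behavior concrete, here is a minimal sketch of such a rule. All names and values (SPEECH_BAND, NOISE_FLOOR_DB, the gains) are hypothetical placeholders, not the parameters of any real device, which are fitted clinically per frequency band:

```python
# Hypothetical fixed-threshold gain rule (illustrative values only;
# real hearing instruments use clinically fitted, per-band parameters).
SPEECH_BAND = (300.0, 3400.0)   # Hz: rough band where speech energy dominates
SPEECH_GAIN_DB = 12.0           # amplification inside the speech band
NOISE_GAIN_DB = -6.0            # attenuation outside the speech band
NOISE_FLOOR_DB = 45.0           # input level below which nothing is changed

def fixed_threshold_gain(freq_hz: float, level_db: float) -> float:
    """Return a gain in dB for one frequency bin under a fixed-threshold rule.

    The thresholds are identical for every user, which is exactly the
    limitation that personalization aims to remove.
    """
    if level_db < NOISE_FLOOR_DB:
        return 0.0                      # quiet input: leave untouched
    if SPEECH_BAND[0] <= freq_hz <= SPEECH_BAND[1]:
        return SPEECH_GAIN_DB           # likely speech: amplify
    return NOISE_GAIN_DB               # likely noise: attenuate
```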

Because medical data in hearing healthcare is highly private, the challenge remains that this level of personalization could require individually generated training data on a scale that may not be realistic.

A collaborative approach that scales AI model training across multiple users and combines personalized models into one via federated deep learning might provide a solution.
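A minimal sketch of the federated averaging idea, assuming a simple linear preference model and hypothetical helper names (this is not the project's actual training pipeline): each user updates the model on data that stays on their device, and only the resulting weights are averaged centrally.

```python
import numpy as np

def local_update(weights, user_data, lr=0.01):
    """One user's private training pass: the raw (features, preferred_gain)
    pairs never leave the device; only the updated weights do."""
    for features, preferred_gain in user_data:
        error = features @ weights - preferred_gain
        weights = weights - lr * error * features  # squared-error gradient step
    return weights

def federated_round(global_weights, all_user_data):
    """One round of federated averaging over every participating user."""
    local_models = [local_update(global_weights.copy(), d) for d in all_user_data]
    return np.mean(local_models, axis=0)

# Toy usage: two users, 3-dimensional acoustic context features.
rng = np.random.default_rng(0)
users = [[(rng.normal(size=3), rng.normal()) for _ in range(20)] for _ in range(2)]
w = federated_round(np.zeros(3), users)
```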

Traditional e-commerce recommender systems typically suggest streaming a new movie based on the similarity of features highly rated by other users. Thus, personalizing hearing instruments might be possible by securely sharing preferences learned from multiple users. However, due to data sparsity, this requires AI methods that augment the available data with similarity relations. Likewise, predicting how preferences evolve depending on user state and context during the day requires combining recommender systems with long-term reward maximization techniques.
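As a rough illustration of how these two pieces could fit together (all names and the data layout are hypothetical, not the project's method), the sketch below fills in sparse ratings via user similarity and then uses an epsilon-greedy policy, a basic long-term reward maximization technique, to pick the next hearing setting:

```python
import numpy as np

def predicted_rating(ratings: np.ndarray, user: int, item: int) -> float:
    """User-based collaborative filtering: estimate a missing rating from
    users with similar preference vectors (NaN marks unobserved entries)."""
    target = np.nan_to_num(ratings[user])
    scores, weights = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or np.isnan(ratings[other, item]):
            continue
        vec = np.nan_to_num(ratings[other])
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target) + 1e-9)
        scores += sim * ratings[other, item]
        weights += abs(sim)
    return scores / weights if weights > 0 else 0.0

def choose_setting(ratings: np.ndarray, user: int, epsilon: float = 0.1) -> int:
    """Epsilon-greedy step toward long-term reward: usually exploit the
    setting with the highest predicted rating, occasionally explore."""
    if np.random.random() < epsilon:
        return np.random.randint(ratings.shape[1])
    estimates = [predicted_rating(ratings, user, i) for i in range(ratings.shape[1])]
    return int(np.argmax(estimates))

# Toy usage: 3 users x 4 hearing settings, NaN = setting not yet rated.
R = np.array([[5.0, np.nan, 1.0, np.nan],
              [4.0, 2.0, np.nan, 1.0],
              [np.nan, 2.5, 1.5, 4.0]])
next_setting = choose_setting(R, user=0)
```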

The aim of the project is to develop new data-driven deep learning methods for understanding individual differences in hearing preferences. The goal is to provide solutions capable of personalizing and adapting hearing instruments to dynamic sound environments throughout the day, something current devices are unable to do.

Thus, a future user will gain an improved hearing experience at home, in the office, or at a restaurant, while at the same time helping other similar users, without having to give up their private data.

PhD project

By: Tiberiu-Ioan Szatmari

Section: Cognitive Systems

Principal supervisor: Jakob Eg Larsen

Co-supervisors: Niels Henrik Pontoppidan, Kang Sun

Project title: Personalizing audiology by learning behavioral graphs based on user centred AI

Term: 15/10/2020 → 14/10/2023

Contact

Tiberiu-Ioan Szatmari
Industrial PhD
DTU Compute

Contact

Jakob Eg Larsen
Associate Professor
DTU Compute
+45 45 25 52 65