
Embedded neural networks in resource-constrained hearing instruments

Zuzana Jelčicová: Making hearing instruments intelligent

…with the help of embedded neural networks

Over 5% of the world’s population has hearing loss, and it is estimated that by 2050 one in every ten people will have disabling hearing loss. One of the main consequences of hearing loss is its impact on the individual’s ability to communicate, and exclusion from communication can significantly affect everyday life, causing feelings of loneliness, isolation, and frustration. Nevertheless, the majority of adults who would benefit from a hearing aid do not use one, and many people who are given a hearing aid do not wear it, saying, “It does not work for me…”.

 

State-of-the-art hearing instruments have many automatic features such as noise reduction, environment classification, and adjustment of the hearing instrument settings. The problem is that they are still pre-programmed for a number of typical situations and cannot immediately adapt to a specific scenario. This is an issue because, in real life, the acoustic environment changes constantly. Moreover, the adaptive algorithms in hearing instruments lack learning and cannot improve their behavior over time in response to sensor information. To overcome this problem, we have to give hearing instruments the ability to learn, by using deep neural networks (DNNs), so that they can become intelligent. Introducing such powerful capabilities directly in a hearing instrument would open up a vast spectrum of options.

 

Current DNNs are able to surpass humans in many artificial intelligence tasks, but this supremacy comes at a significant cost in resources. Due to their computational complexity and size, they are typically run on high-performance servers in the cloud, and the results are then delivered wirelessly to lower-complexity, power-constrained devices such as hearing instruments. However, sharing data with the cloud is undesirable because of security, privacy, latency, and connectivity issues. Designing efficient hardware architectures and deep learning algorithms that allow DNNs to run locally on hearing instruments is therefore crucial. Furthermore, with the ability to learn, audio data could be combined with other sensory information, creating a neural network (NN) based sensor-fusion platform for the development of features based on sensors such as a gyroscope and a temperature sensor.
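As a purely illustrative sketch of what such NN-based sensor fusion could look like, the toy PyTorch model below feeds one frame of audio features together with motion-sensor readings into a single small classifier. The feature dimensions, layer sizes, and class count are assumptions chosen for this example and are not the project’s actual design.

import torch
import torch.nn as nn

class SensorFusionNet(nn.Module):
    # Toy sensor-fusion network: concatenates audio and motion features
    # and maps them to a handful of (hypothetical) acoustic-scene classes.
    def __init__(self, n_audio=40, n_motion=6, n_classes=4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(n_audio + n_motion, 32),  # fuse the two modalities
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, audio_feat, motion_feat):
        x = torch.cat([audio_feat, motion_feat], dim=-1)
        return self.classifier(x)

net = SensorFusionNet()
# One frame: 40 mel-band energies plus 6 gyroscope/accelerometer values.
scores = net(torch.randn(1, 40), torch.randn(1, 6))
print(scores.shape)  # -> torch.Size([1, 4])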

 

However, hearing instruments have many limiting factors, such as area cost, memory footprint, power budget, and throughput, which makes embedding NNs in them a challenging task. Not only does a hearing instrument need to be always on, it must also execute complex DSP algorithms, and adding NN processing introduces an additional workload that must be handled as well. Efficient implementation of DNNs is therefore of prime importance under these constraints. Although research on embedded DNNs exists, it has not yet been realized in highly constrained devices such as hearing instruments.
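To make these constraints concrete, here is a back-of-the-envelope estimate of the weight memory and multiply-accumulate (MAC) workload of a tiny fully connected audio classifier evaluated on every analysis frame. The layer sizes and frame rate are assumptions chosen for illustration only; the point is that even a very small network adds an always-on workload and a memory footprint on top of the existing DSP chain.

# Illustrative layer sizes (inputs, outputs) for three dense layers.
layers = [(46, 32), (32, 32), (32, 4)]

params = sum(i * o + o for i, o in layers)      # weights + biases
macs_per_frame = sum(i * o for i, o in layers)  # multiply-accumulates

frames_per_second = 100                         # e.g. a 10 ms hop size

print(f"parameters:      {params}")
print(f"weight memory:   {params} bytes at 8 bits/weight, "
      f"{4 * params} bytes at float32")
print(f"MACs per frame:  {macs_per_frame}")
print(f"MACs per second: {macs_per_frame * frames_per_second}")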

 

We hypothesize that such efficient NNs, capable of performing complex computations, can be designed and embedded directly in a hearing instrument while still delivering high accuracy and performance. In particular, we hypothesize that we can identify suitable NN topologies and efficient hardware implementations of them for the chosen use cases.

 

Therefore, successful completion of this research will pave the way for a highly personalized and human-friendly user experience for hearing-instrument users and healthcare professionals.

  

 

PhD project

By: Zuzana Jelčicová

Section: Embedded Systems

Principal supervisor: Jens Sparsø

Co-supervisors: Lars Kai Hansen, Evangelina Kasapaki & Anders Hebsgaard

Project title: Embedded neural networks in resource-constrained hearing instruments

Term: 01/09/2019 → 31/08/2022

Contact

Jens Sparsø
Emeritus
DTU Compute
+45 45 25 37 47

Contact

Lars Kai Hansen
Professor, head of section
DTU Compute
+45 45 25 38 89