AI to help doctors treat deafness

Tuesday 18 Jan 22

Contact

Paula Lopez Diez
PhD student
DTU Compute

Contact

Rasmus Reinhold Paulsen
Professor
DTU Compute
+45 45 25 34 23

Theme on health technology

Since 2010, the number of engineers in the healthcare system has increased by 22 percent, so that in 2019, 553 engineers were directly employed there. In a theme on health technology, DTU writes about developments in areas such as medical imaging technology, artificial intelligence and sensors, and portable equipment. Technology that supports doctors creates opportunities for faster diagnosis and treatment and increases quality.

Image analysis tools can make diagnosis and planning more efficient for implantation of the advanced hearing aid known as a cochlear implant.

One out of five patients who have hearing loss, are severely hearing impaired, or were born deaf has deformations in the inner ear and could benefit from having an advanced hearing aid known as a cochlear implant (CI) implanted. The labyrinthine structure of the inner ear is examined by CT scan, but interpreting the images is very difficult and can delay or completely rule out the treatment.

DTU PhD student Paula López Diez is studying how artificial intelligence (AI) can be used for image analysis. AI will make it faster for doctors to determine whether an implant will be suitable.

“It's highly motivational to see what CI means for patients, and to know that my research will be able to help them. AI is now used in every way possible, and will become even more widespread, because it works. We should prioritise the use of AI research within medical science, in order to improve diagnosis and treat patients earlier,” says Paula.

Software to help with complicated operations
A cochlear implant consists of exterior and interior parts. Externally, an audio processor placed behind the ear picks up sound, which it digitalises and transfers via a transmitter located on the skull to an implant receiver just under the skin. The receiver converts the digital data to an electronic signal, which is then sent to a series of electrodes implanted in the cochlea. The electrodes stimulate the auditory nerve, enabling the brain to perceive sound.
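
The signal path described above can also be illustrated schematically in code. The following Python sketch models each stage as a simple function; the sample values, the crude 8-bit quantisation, and the three-way electrode split are placeholders for illustration only, not how an actual implant encodes or transmits sound.

```python
# Purely illustrative sketch of the cochlear implant signal path:
# sound -> audio processor -> digital data -> transmitter/receiver -> electrode signals.
from __future__ import annotations


def pick_up_sound(microphone_samples: list[float]) -> list[float]:
    """The external audio processor behind the ear captures sound."""
    return microphone_samples


def digitise(samples: list[float]) -> list[int]:
    """The processor converts the sound to digital data (here: crude 8-bit quantisation)."""
    return [max(-128, min(127, round(s * 127))) for s in samples]


def transmit_through_skin(digital_data: list[int]) -> list[int]:
    """The transmitter on the skull passes the data to the receiver just under the skin."""
    return digital_data


def to_electrode_signals(digital_data: list[int], num_electrodes: int = 3) -> list[list[int]]:
    """The receiver converts the digital data to electrical signals for the electrode
    array in the cochlea (here simply distributed across a few electrodes)."""
    return [digital_data[i::num_electrodes] for i in range(num_electrodes)]


# The electrode signals stimulate the auditory nerve, which the brain perceives as sound.
sound = [0.10, -0.20, 0.40, 0.05, -0.60, 0.30]
signals = to_electrode_signals(transmit_through_skin(digitise(pick_up_sound(sound))))
print(signals)
```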

The actual operation is complicated, as surgeons must drill a hole in the skull in order to implant the receiver and electrodes. If they touch the facial nerve, or place an electrode too close to it, the patient could experience paralysis, tics, and pain. And the implant will not work if the electrodes are placed incorrectly in relation to deformations. The operation therefore requires a high degree of precision, and surgeons need detailed knowledge of deformations in the ear.

“The objective is to develop software to which doctors upload the CT scans, and which is able to automatically identify the typical types of deformations in the structure of the inner ear. The software could potentially be developed further to assess whether a cochlear implant would work, and to indicate to the surgeon how, and from which angle, to perform the operation,” says Paula.

"AI is now used in every way possible, and will become even more widespread, because it works. We should prioritise the use of AI research within medical science, in order to improve diagnosis and treat patients earlier."
DTU PhD student Paula López Diez

“Many ENT doctors are unaware of deformations in the ear, as there are so few patients who have them. And no two deformations are identical, which means the algorithm will also be able to make a difference in this area,” she adds.
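
Interpreted loosely, the software Paula describes would chain a few steps: classify the type of deformation from the uploaded CT scan, assess whether an implant is likely to work, and propose a surgical approach. The Python sketch below only illustrates that envisioned flow; every function, class label, and returned value is hypothetical and stands in for models and rules that do not exist in the project.

```python
# Hypothetical outline of the envisioned planning software; all names and values
# are illustrative placeholders, not actual project code.
from dataclasses import dataclass


@dataclass
class PlanningResult:
    deformation_type: str        # identified type of inner-ear deformation
    ci_feasible: bool            # whether a cochlear implant is expected to work
    insertion_angle_deg: float   # suggested approach angle for the surgeon


def classify_deformation(ct_scan) -> str:
    """Placeholder for a trained deep-learning model that labels the inner-ear structure."""
    return "incomplete partition type II"  # dummy label for illustration


def assess_ci_feasibility(deformation_type: str) -> bool:
    """Placeholder for a rule or model mapping deformation type to implant suitability."""
    return deformation_type != "cochlear aplasia"  # dummy rule for illustration


def suggest_insertion_angle(ct_scan, deformation_type: str) -> float:
    """Placeholder for geometric planning based on the segmented anatomy."""
    return 23.5  # dummy angle for illustration


def plan_from_ct(ct_scan) -> PlanningResult:
    """Combine the steps: classify the deformation, assess feasibility, propose an approach."""
    deformation = classify_deformation(ct_scan)
    return PlanningResult(
        deformation_type=deformation,
        ci_feasible=assess_ci_feasibility(deformation),
        insertion_angle_deg=suggest_insertion_angle(ct_scan, deformation),
    )


if __name__ == "__main__":
    # A real system would load the uploaded CT volume here.
    print(plan_from_ct(ct_scan=None))
```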

Paula is employed in the Visual Computing research section at DTU Compute, where the scientists are experts in using artificial intelligence methods such as machine learning and deep learning for image analysis. Using her knowledge of the ear’s structure, Paula trains her mathematical model to identify the special characteristics in CT scan images.

She recently published her first research results along with colleagues from DTU and the partner in the PhD project, Oticon Medical. The results confirm that AI makes it possible to identify the auditory nerve and facial nerves in scanned images.

AI trains on data from Russian patients
Because there is limited data available for patients with deformation of the inner ear, DTU uses data from the National Medical Research Center for Otorhinolaryngology of the Federal Medico-Biological Agency of Russia in Moscow. Oticon Medical already collaborates with the head of the hospital, Professor Khassan Diab, who is a specialist in the field.

Collaboration with Russian research institutes is a first for the DTU department. According to associate professor and PhD supervisor Rasmus Paulsen, DTU now has the opportunity to train its mathematical models on a unique dataset.

“It’s extremely important to have access to data that reflects the enormous variation in patients with deformation of the inner ear, as well as to receive expert input on the optimum surgical procedure in order to plan operations. Khassan Diab’s research centre covers a huge geographical area, and his clinic therefore treats many more patients than most others. It’s invaluable to have access to this data along with clinically relevant parameters, such as the surgical techniques and results of implant operations.”

Paula will be visiting the hospital in Moscow during the spring to experience how the doctors work on a daily basis and see how they generate data.

“I’m an engineer and mathematician who works with mathematical models, and have never studied anatomy. Of course I have considerable knowledge of the ear, but it’s always good to be able to get out and gain an insight into what doctors need, and how our algorithms can help.”

Oticon Medical: Innovation for patients
Oticon Medical has high expectations for the PhD project, which is funded by the William Demant Foundation, explains co-supervisor François Patou, whose position as Senior Translational Research Manager involves deploying scientific know-how into practice more rapidly:

“The collaboration between DTU, Professor Diab, and Oticon Medical represents a big step towards safer cochlear implant treatment in the most challenging cases: infants born with deformations. For Oticon Medical, the project is a springboard for using automated image analysis in CI treatment, and it supports the company’s objective of allowing innovation to be beneficial for all CI users.”

Artificial Intelligence, machine learning and deep learning

  • When computer programs can do something ‘smart’, it’s called ‘Artificial Intelligence’ or simply ‘AI’. AI is therefore an umbrella term for a range of different methods.
  • One of these methods is machine learning, and the newest and most advanced use of machine learning is called ‘deep learning’.
  • Deep learning is based on a neural network, which is a mathematical model capable of learning on its own how to classify things such as images, based on a given dataset and without being explicitly programmed. It is called a ‘data-driven model’ because it learns from data.
  • During training, the neural network is presented with data/images from a training set (in this instance, around 70 patients from the National Medical Research Center for Otorhinolaryngology in Moscow) and attempts to classify each image.
  • By comparing the output from the network with the ‘ground truth’ (the actual diagnoses from the hospital), the model can correct its mistakes and improve its classification of the images.
  • By constant repetition, the network learns which data patterns can be used to classify the images correctly.
  • Once the model is trained, it is then tested on unknown data (in this instance, 30 patients from the Moscow hospital) to ensure that it works. A simplified code sketch of this training and testing workflow is shown below.
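
The following is a minimal, self-contained sketch of that training and testing workflow in Python, using the PyTorch library. Everything in it is illustrative: random tensors stand in for CT images, and the small network, the number of classes, and the training settings are placeholders rather than the model and data actually used in the project.

```python
# Illustrative training/testing loop: random tensors stand in for CT images,
# and the tiny network is a placeholder, not the project's actual model.
import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_CLASSES = 4   # e.g. a handful of deformation types (illustrative)
IMAGE_SIZE = 32   # side length of a 2D patch standing in for a CT slice

# Fake dataset: 70 "training patients" and 30 "test patients", each with one
# image patch and one ground-truth label (the diagnosis from the hospital).
train_images = torch.randn(70, 1, IMAGE_SIZE, IMAGE_SIZE)
train_labels = torch.randint(0, NUM_CLASSES, (70,))
test_images = torch.randn(30, 1, IMAGE_SIZE, IMAGE_SIZE)
test_labels = torch.randint(0, NUM_CLASSES, (30,))

# A small convolutional neural network mapping an image patch to class scores.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * (IMAGE_SIZE // 2) ** 2, NUM_CLASSES),
)

loss_fn = nn.CrossEntropyLoss()  # compares network output with the ground truth
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: classify the training images, compare with ground truth,
# and repeatedly adjust the network's weights to reduce the error.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(train_images), train_labels)
    loss.backward()
    optimizer.step()

# Testing: evaluate on data the network has never seen to check that it works.
with torch.no_grad():
    predictions = model(test_images).argmax(dim=1)
    accuracy = (predictions == test_labels).float().mean().item()
print(f"Accuracy on unseen test patients: {accuracy:.0%}")
```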
