
Machine Learning Helps Diagnose Inflammation of the Middle Ear

Tuesday 21 Sep 21


Artificial intelligence, machine learning, and deep learning

The Danish Council of Ethics defines artificial intelligence - often simply called AI - as machines “that are capable of considering, learning, and making decisions on the same level as a human being”.

Artificial intelligence is an overall term covering multiple methods.

One of the methods is machine learning, and the latest and most advanced use of machine learning is called deep learning.

Deep learning is based on neural networks: mathematical models that can learn on their own to classify, for example, images from a given data set. Because the model is built from data, it is called a data-driven model.

Through a training process, the neural network learns how data are to be analysed.

In the training process, the neural network is presented with all the images in the training data set and attempts to classify each individual image.

By comparing the network’s output with the ‘ground truth’ (here Dr. Kamide’s diagnosis of each patient), the model can improve its classification of the images it got wrong.

Through these many repetitions, the network learns which patterns in the data can be used to classify the images correctly.

Once the model has been trained, it can be used to make predictions on new data that the model has not used for training.
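The training process described in this fact box can be illustrated with a minimal code sketch of a supervised training loop. The sketch below uses PyTorch and is purely illustrative: the network, data loader, number of repetitions, and other settings are placeholders, not the actual model used in the project.

```python
# Minimal sketch of the training process described above (PyTorch).
# Model, data loader, and hyperparameters are illustrative placeholders.
import torch
from torch import nn

def train(model, loader, epochs=10, lr=1e-4, device="cpu"):
    """Repeatedly show every labelled image to the network and let it
    improve by comparing its output with the ground-truth diagnosis."""
    model = model.to(device)
    loss_fn = nn.CrossEntropyLoss()          # measures how far the output is from the ground truth
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(epochs):              # the "many repetitions"
        for images, labels in loader:        # all images in the training data set
            images, labels = images.to(device), labels.to(device)
            logits = model(images)           # the network's attempted classification
            loss = loss_fn(logits, labels)   # compare with the ground-truth label
            optimizer.zero_grad()
            loss.backward()                  # send the error back through the network
            optimizer.step()                 # adjust the weights to classify better next time
    return model
```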

A PhD researcher will make it easier for doctors to make the correct diagnosis when children have earaches, helping to avoid hearing loss and unnecessary use of antimicrobial agents.

When my son was little, he suddenly developed a high fever and whined constantly. Our family doctor examined his ears briefly with an otoscope and prescribed antimicrobial agents against inflammation of the middle ear. The little guy quickly got well again.

Maybe he would have recovered without the medicine, because it can be difficult to tell from the symptoms whether inflammation of the middle ear is caused by bacteria and needs treatment, or will go away on its own. In the future, doctors will be able to take a picture with the otoscope and automatically get help to make a quick and reliable diagnosis.

Josefine Vilsbøll Sundgaard is a PhD student at DTU Compute and uses the latest research in machine learning - known as deep learning - for image analysis of otoscopy images, so that a mathematical model can determine whether treatment is needed based on eardrum characteristics. The model is just as good as the best ear specialists and much better than doctors who only occasionally treat children with ear pain.

1,336 images of eardrums

The project is a collaboration with Interacoustics Research Unit, which is located at DTU and forms part of the company Interacoustics - a world leader in audiological equipment.

One of the world’s leading ear specialists - Dr. Yosuke Kamide from Japan - has lent Josefine Vilsbøll Sundgaard 1,336 images of eardrums, labelling each image according to whether it shows a healthy ear, inflammation of the middle ear that requires treatment, or inflammation that does not.

The smart thing about deep learning is that it requires no input other than training data (an image and a diagnosis for each patient). The mathematical model then learns by itself to identify patterns in the data set, without being told where to look for these patterns. And when it classifies an image incorrectly, the model runs the error ‘backwards’ through its network and corrects the parts that led to the misinterpretation.

“It may be that the model has found other characteristics in the images than those the doctors see or use, and has linked them together. And - in this way - the model has learnt to make diagnoses based on pattern recognition,” says Josefine Vilsbøll Sundgaard.
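As an illustration of this three-way classification (healthy, treatment-requiring, and non-treatment-requiring inflammation of the middle ear), the sketch below sets up a standard image classifier in PyTorch. The choice of a ResNet-18 network pre-trained on everyday photographs, and the class names, are assumptions made for illustration only; the article does not state which architecture the project uses.

```python
# Illustrative sketch only: a three-class image classifier matching the
# labels described above. ResNet-18 is an assumed, not confirmed, choice.
import torch
from torch import nn
from torchvision import models

CLASSES = ["healthy", "treatment_required", "no_treatment_required"]

def build_classifier(num_classes=len(CLASSES)):
    # Start from a network pre-trained on ordinary photographs and replace
    # its final layer so it outputs one score per eardrum diagnosis.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def predict(model, image_batch):
    # Once trained, the model can classify otoscopy images it has never seen.
    model.eval()
    with torch.no_grad():
        logits = model(image_batch)
    return [CLASSES[i] for i in logits.argmax(dim=1).tolist()]
```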

Finally, Josefine Vilsbøll Sundgaard has examined whether the model is able to make an automatic diagnosis based on new images that it has not seen before. It is. The model makes a correct diagnosis for 85 per cent of the images. This is much better than, for example, paediatricians and on a par with the best ear specialists’ diagnoses.

Previous studies have shown that ordinary paediatricians only make the right diagnosis in 50 per cent of the patients, GPs are correct in 67 per cent of the cases, while ear, nose and throat specialists make correct assessments in 75 per cent of the patients.
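For readers curious how such a figure is obtained, the sketch below shows a standard way to measure accuracy on test images the model has never seen. The model and test data loader are placeholders; only the 85 per cent figure itself comes from the study.

```python
# Minimal sketch of computing accuracy on unseen test images (PyTorch).
import torch

def accuracy(model, test_loader, device="cpu"):
    """Fraction of test images the model diagnoses correctly."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total   # e.g. 0.85 corresponds to 85 per cent correct diagnoses
```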

DTU and Interacoustics are not the first to use deep learning to analyse images of eardrums and diagnose ear problems. But Josefine Vilsbøll Sundgaard’s result is so good that it has been published in the world’s leading journal for medical applications of image analysis, Medical Image Analysis (Elsevier).

Proof of concept

Josefine Vilsbøll Sundgaard is still a year from completion of her PhD project, and she is now working on data from Interacoustics’ instruments.

“We can see that the model has difficulty making the right diagnoses if there is a lot of earwax on the eardrum, or the images are blurry, thus hiding important characteristics. It is a weakness that we would like to improve. By looking at other types of data from Interacoustics, we can hopefully improve the diagnosis of inflammation of the middle ear,” says Josefine Vilsbøll Sundgaard.

Deep learning is a relatively new field that is developing extremely quickly, and this development has made it possible to automate many analysis tasks. As the research field becomes more widespread, its methods are being used in more and more places in practice. Today, deep learning is used for image analysis and as an aid in diagnosing, for example, cancer.

According to Josefine Vilsbøll Sundgaard’s supervisor - Associate Professor Rasmus R. Paulsen - the research confirms that deep learning is very suitable for distinguishing between variants of inflammation of the middle ear.

“Even people who aren’t doctors can see from the images that something is wrong if the eardrum is irritated, red, and swollen, and perhaps there is liquid in the ear. But if doctors use advanced image analysis methods, they have a greater chance of helping patients in the grey area correctly and avoiding excessive treatment with antimicrobial agents,” says Rasmus R. Paulsen.

The PhD project delivers a research-based proof of concept at DTU. It is then up to Interacoustics to apply the new knowledge, for example by integrating deep learning models into the software of the digital cameras used in otoscopes.
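As a purely hypothetical illustration of what such an integration step could involve, the sketch below packages a trained PyTorch model as a standalone TorchScript file that device software could load and run; the article does not describe how Interacoustics would actually do this, and the file name and input size are assumptions.

```python
# Hypothetical sketch: freezing a trained model into a standalone file
# that embedded or cloud software could load. Purely illustrative.
import torch

def export_for_device(model, output_path="otoscope_classifier.pt"):
    model.eval()
    example_input = torch.randn(1, 3, 224, 224)       # one dummy otoscopy image (assumed size)
    scripted = torch.jit.trace(model, example_input)  # record the model as a self-contained artefact
    scripted.save(output_path)                        # file that other software can load without Python training code
    return output_path
```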

Promising result

James Harte - Head of Research at Interacoustics - calls the research result extremely promising and looks forward to seeing how efficient and sensitive the method can become through further improvements. For competitive reasons, he will not comment on how Interacoustics will specifically apply deep learning methods in the coming years.

“But it’s clear to us in Interacoustics that deep learning can be used in diagnosing diseases that are well described with images and other data. I also see a tendency towards the use of such data not only in devices, but also in cloud-based solutions, and this creates an opportunity to develop better solutions for our customers,” says James Harte. 

 
