Making sense of the senses - where does the brain integrate audio and visual cues?

Agata Wlaszczyk: Model-based functional Magnetic Resonance Imaging of neural processes underlying audio-visual integration of speech
At every moment of our lives, our brains receive a multitude of diverse signals from all of our senses. Imagine talking to someone: the motion of their mouth, which we see, and the words they speak, which we hear, are processed in different parts of the brain. Yet, instead of experiencing scattered fragments of individual senses and struggling to make sense of them, we perceive these cues as unified and are able to enjoy the conversation.
 
The binding of audio and visual cues, known as audio-visual integration, often works in our favor: each modality provides a separate type of information, and combining them improves our performance in a variety of tasks. Because integration has a direct impact on our perception, its mechanisms need to be understood and explained from a range of perspectives. Previous studies investigating audio-visual integration have proposed computational models of this process and validated them against data from behavioral experiments. However, these models still need to be confronted with actual brain activity.
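
A common computational account of such integration in the behavioral literature is maximum-likelihood (reliability-weighted) cue combination. The sketch below is a minimal illustration of that idea under our own simplifying assumptions (independent Gaussian noise on each cue); it is not the specific model developed in this project, and all function and variable names are illustrative.

```python
import numpy as np

def integrate_cues(mu_a, sigma_a, mu_v, sigma_v):
    """Fuse an auditory and a visual estimate by weighting each cue
    by its reliability (inverse variance), assuming independent
    Gaussian noise on the two unimodal estimates."""
    reliability_a = 1.0 / sigma_a**2
    reliability_v = 1.0 / sigma_v**2
    w_a = reliability_a / (reliability_a + reliability_v)       # auditory weight
    mu_av = w_a * mu_a + (1.0 - w_a) * mu_v                     # fused estimate
    sigma_av = np.sqrt(1.0 / (reliability_a + reliability_v))   # fused (smaller) uncertainty
    return mu_av, sigma_av

# Example: a noisy auditory cue is pulled towards a more reliable visual cue
print(integrate_cues(mu_a=0.0, sigma_a=2.0, mu_v=1.0, sigma_v=1.0))  # -> (0.8, ~0.894)
```

A key property of this scheme is that the fused estimate is always more reliable than either cue alone, which is one reason combining modalities improves task performance.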
 
The goal of this project is to find the areas of the brain where information from the auditory and visual modalities is integrated. This will be achieved by designing and performing an experiment in which brain activity is recorded with fMRI, allowing us to identify the structures active during audio-visual integration. We will build on a previous experimental design and its findings (Lindborg & Andersen, 2020) in order to tackle audio-visual information binding from multiple perspectives in an organized and coherent manner. The results of the fMRI study will then be evaluated in terms of their compatibility with existing computational models of audio-visual integration.
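
As a concrete, hedged sketch of what model-based fMRI can look like in practice, the snippet below turns trial-by-trial predictions from an integration model into a regressor by convolving them with a canonical haemodynamic response function and fitting a simple general linear model to a voxel's time series. This is a generic illustration under our own assumptions (an SPM-like double-gamma HRF, ordinary least squares), not the analysis pipeline used in the project.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """Approximate double-gamma haemodynamic response function,
    sampled at the repetition time (TR) of the scan."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # peak minus late undershoot
    return hrf / hrf.sum()

def model_based_regressor(onsets, model_values, n_scans, tr):
    """Place trial-by-trial model predictions (e.g. predicted integration
    strength) at the stimulus onsets and convolve with the HRF."""
    stick = np.zeros(n_scans)
    stick[np.round(np.asarray(onsets) / tr).astype(int)] = model_values
    return np.convolve(stick, canonical_hrf(tr))[:n_scans]

def glm_fit(bold, regressor):
    """Ordinary least squares fit of the regressor (plus a constant) to one
    voxel's BOLD time series; beta[0] indicates how strongly that voxel
    tracks the model's predictions."""
    X = np.column_stack([regressor, np.ones_like(regressor)])
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return beta
```

Voxels whose activity is well explained by such a model-derived regressor are candidate sites of audio-visual integration.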
 
By uncovering how the brain binds visual and auditory information and linking these findings to accurate models, we gain a better understanding of sensory processing in the brain. This knowledge may be used to create tools for the assessment and evaluation of sensory integration disorders, in which information from the different senses is not processed as expected.

PhD project

By: Agata Wlaszczyk

Section: Cognitive Systems

Principal supervisor: Tobias Andersen

Co-supervisor: Kristoffer H. Madsen

Project title: Model-based functional Magnetic Resonance Imaging of neural processes underlying audio-visual integration of speech

Term: 01/09/2020 → 31/08/2023

Contact

Agata Wlaszczyk
PhD student
DTU Compute

Contact

Tobias Andersen
Associate professor
DTU Compute
+45 45 25 36 87

Contact

Kristoffer Hougaard Madsen
Associate Professor
DTU Compute
+45 45 25 38 95