Auditory research with NIRS: an interview with Dr. Marc van Wanrooij


Could you please introduce yourself for the audience?

I am Marc van Wanrooij, a researcher at the Biophysics Department of Radboud University Nijmegen. In collaboration with the ENT (ear-nose-throat) department of the Radboud UMC, we are doing clinical research on hearing and the hearing-impaired. We are called the Hearing and Implants Lab of the Donders Institute for Brain, Cognition and Behaviour.

At the Hearing and Implants Lab we are currently doing research on people with hearing loss, mostly people with cochlear implants. A cochlear implant consists of several electrodes that stimulate the cochlea electrically, rather than in the common acoustical way. Such an implant helps deaf people to hear again.

What are the core research topics that you and your group are working on?

We studied temporal cortical activation, as represented by concentration changes of oxy- and deoxy-hemoglobin, in four easy-to-apply optical fNIRS channels. We measured 33 normal-hearing adult subjects and five post-lingually deaf cochlear implant (CI) users in response to supra-threshold unisensory auditory and visual stimuli, as well as to congruent auditory-visual speech stimuli.

This is interesting from a neuroscience point of view, because it is very hard to study these people with neuroimaging methods like fMRI or EEG due to the electromagnetic components of the devices. The acoustic noise of functional MRI (fMRI) limits its usefulness in auditory experiments, and electromagnetic artifacts caused by the electronic implants worn by the subjects can severely distort EEG and fMRI recordings. Therefore, we assessed audio-visual activation of the temporal cortex with a silent, optical neuroimaging technique: functional near-infrared spectroscopy (fNIRS).

What we are studying is how these cochlear implants can help patients regain their ability to hear and how we can improve these devices. In doing so, we also wonder how people perceive sounds. Our goal is to objectively measure what they are hearing.

It is difficult for our patients to answer this question because some of them are children or even babies. Non-invasive functional neuroimaging techniques can expose the neural processes underlying the integration of the multisensory processes required for speech understanding in humans, and might be the solution to obtaining objective measures.

Back in 2005, the ENT department leased the OxyMon from Artinis, which proved to be so helpful that the department eventually bought the system. 

Now, ten years later, we have published a paper about temporal cortex activation to audiovisual speech in normal-hearing listeners and cochlear implant users, measured with functional near-infrared spectroscopy.

Can you briefly summarize the findings of your latest paper?

We tried to find audiovisual integration in the temporal cortex by comparing normal-hearing subjects with CI users. Typically, cochlear implant users tend to rely more on visual information because they have poor access to auditory information, but on the other hand they also seem to be better at integrating sounds and images.

We presented movies with or without the audio or video and asked the subjects to listen to the story and try to understand what was being told. 

What we found was actually not something we expected to find at first. 

Normally, if a sound is presented you will get activation of the auditory cortex. If an image is presented, there might also be some activation in other parts of the temporal cortex. If you present both and there is no ‘real’ integration, you might expect these two responses simply to add up. But if there is true integration, if the brain uses both inputs in an optimal fashion, you would expect the combined response to be even bigger than the sum of the two responses.
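To make this additivity test concrete, here is a minimal sketch in Python. The response amplitudes are invented for illustration; this shows the logic of the comparison, not the analysis pipeline used in the paper.

```python
# Hypothetical peak oxy-hemoglobin amplitudes (in micromolar) from one
# fNIRS channel; these numbers are made up for illustration.
resp_a = 0.80    # response to the auditory-only stimulus
resp_v = 0.25    # response to the visual-only stimulus
resp_av = 1.30   # response to the combined audiovisual stimulus

# Additive prediction: with no integration, the AV response is just A + V.
additive = resp_a + resp_v

if resp_av > additive:
    print("superadditive: evidence for true audiovisual integration")
elif resp_av < additive:
    print("subadditive: e.g. a saturated hemodynamic response")
else:
    print("additive: the two responses simply sum")
```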

During the research we found saturation in the hemodynamic response to a strong audiovisual stimulus. It seemed as if the sound alone evoked such a large response that it could not get any bigger, even when the video was added.

Many researchers find audiovisual integration both in behavior and, for example, in single-unit recordings from primate neurons. So, at first it was quite discouraging to find this saturation effect. Because of the saturation, we could not see whether any audiovisual integration was going on.

That means we do not know whether the neural responses actually exhibit superadditive integration.


Were you expecting those results?

Yes and no. We hoped that saturation would not occur. In the literature we had studied, including fMRI papers, saturation is typically assumed not to be that big of a problem. However, the stimuli we used were very strong, or salient. If you listened to them, you could easily follow the story. In a sense, even the lip movements were very easy to understand for some normal-hearing people, even though they had never been trained in lip-reading.

So, from a behavioral point of view, you would imagine that because the auditory and visual information was so clear, there cannot be any benefit from further integrating the two. Even irrespective of the saturation of the hemodynamic response, it could simply be that the stimuli were too informative by themselves.

We also noted in the paper that this might be a problem. That is one of the reasons why, in our upcoming study, we will use new stimuli and degrade the information by presenting background noise. This will hopefully lead to stronger integrative effects.

Why did you start using NIRS, and what were its strengths for your research?

In 2005, the ENT department heard that NIRS was a potential way of objectively measuring the performance of CI users. Dick Stegeman of the neurology department at the UMC Nijmegen told us that there was a spin-off company of Radboud University that provides these fNIRS systems. That is how we came into contact with Artinis and the OxyMon system.

The major advantage of using NIRS for our research is simply being able to measure CI users objectively. You cannot use MRI or EEG for this purpose. We did use PET studies before, but you can imagine that, because of the radioactivity, that was not an ideal way of studying patients. That is one of the reasons why we started using fNIRS.

Ten years ago, not many fNIRS studies had been done yet. Around 2005, many researchers were already saying that fNIRS was ideally suited to study CI users, but no one had actually done it back then. Only within the last few years have the first five papers come out that study CI users with fNIRS.

In the beginning, measuring with a NIRS system was not very easy, for many reasons. Researchers still had to test whether it worked, whether it was valid, and whether it was repeatable. Only in the last 5-10 years have clinical researchers studying CI patients been able to validate these measurements.

Back then, I suggested that people wait a while before using NIRS because it was a relatively new technique. But with all the interesting studies being done nowadays, and people gaining more and more knowledge about the technique, I think this is the perfect time to start using functional NIRS for your research.

Do you think there is enough experience and evidence in how to interpret NIRS now?

Yes, absolutely. That is also the reason why we bought a 48-channel NIRS system from you. One of the problems with our paper was that we previously had only two channels: a reference channel and one deep channel. With only one deep channel, we did not know exactly where we should be measuring, and therefore we could have missed interesting cortical areas simply because we did not use enough optodes. That is why we bought the 48-channel OxyMon, which we are very excited about.

We are just starting to record patients, and it seems really easy to do. With the 2-channel system, we always had to fiddle around, repositioning the optodes until we found a signal. With the 48-channel system, there is always at least one channel that picks up an interesting and relevant signal, so we are pleased with that.

So, briefly back to your latest paper: is there any way we as a company helped or supported you in particular?

We tried to do a lot ourselves, but you were always able to advise us when we had questions. One of the main advantages was that you typically came around and supported us on site. Besides the customer support, the meetings with the NIRS researchers in Nijmegen also helped a lot.

One of the most important things we learned from Artinis was the reference channel technique. At first, we had great trouble getting any signal out of the data and could not find a particular reason for that; basically, we were only picking up noise. In the paper we mention that without reference channels you will not see any signal, because there is too much noise.

By using reference channel subtraction, you obtain good signals that are not visible in the raw data. This was actually one of the tips from your company, and it helped us a lot.
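For readers unfamiliar with the technique: a shallow reference channel mainly picks up skin blood flow and other systemic noise, which can then be removed from the deep channel. Below is a minimal sketch in Python with simulated signals and a simple least-squares scaling; it illustrates the principle, not the exact method used in the paper.

```python
import numpy as np

def reference_subtract(deep, reference):
    """Remove the superficial signal seen by a shallow reference channel
    from a deep fNIRS channel via a least-squares scaling factor."""
    # Fit deep ~ beta * reference and keep the residual, which should
    # contain the cortical signal of interest.
    beta = np.dot(reference, deep) / np.dot(reference, reference)
    return deep - beta * reference

# Simulated example: a slow evoked response buried in shared systemic noise.
t = np.linspace(0, 60, 600)                    # 60 s sampled at 10 Hz
systemic = 0.5 * np.sin(2 * np.pi * 0.1 * t)   # e.g. a slow systemic oscillation
evoked = 0.2 * np.exp(-((t - 30) ** 2) / 50)   # hemodynamic response shape
deep = evoked + systemic + 0.05 * np.random.randn(t.size)
reference = 0.9 * systemic + 0.05 * np.random.randn(t.size)

cleaned = reference_subtract(deep, reference)  # the evoked response emerges
```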

That’s nice to hear, thanks. So back to you, what are your plans for follow-up studies? 

We are redoing the study with a larger number of CI users. In our first study we had only five; now we are going to include 15 to 20 CI users. We will also use the 48-channel OxyMon instead of a 2-channel fNIRS system.

Besides the system and the number of CI users included in the study, our stimuli will be better, in the sense that we will record audiovisual sentences spoken by speech therapists. We will also degrade the stimuli so that we can determine the performance of the subjects in difficult listening or viewing situations. We will even run tests during the NIRS recordings, so we can see trial by trial whether people actually heard a sentence, and hopefully we will be able to correlate that with the NIRS responses.

Interesting. You know the McGurk effect; are you also planning to study that with fNIRS? That is, if you see someone's mouth making one sound, but a different sound is dubbed on top of it, people actually hear a third sound.

Yes, indeed. If I say ‘ba’ and you put a ‘ga’ sound over it, people will hear ‘da’. We do not specifically do that right now, but all kinds of audiovisual integration are possible. In our first paper we simply asked: if we add two sorts of information, does the information get better?

The McGurk effect is actually a very interesting phenomenon, because the information does not get ‘better’, it gets ‘different’. Of course, in the speech world this is very important. We have thought about doing these kinds of experiments and we have also recorded the stimuli for them. Some students are doing pilot experiments on this effect; if the results are interesting, we will follow up on it. But first we will see what the CI users do with the speech sentences. Later on, we plan to do more fundamental science, like studying the McGurk effect.

Do you have any tips or clues for the reader when doing a follow-up study on your work?

More channels, use reference channel subtraction, and use well-designed stimuli. We also work with our students on trying to understand the more basic aspects of sensory processing with NIRS. Right now we use speech stimuli, which are very cognitive, very high-level stimuli. If something happens in the brain, we do not really know what caused it.

For example: we can present sentences, as we are doing currently. Some people might not hear some of the words and simply fill them in from expectation, so they might still be able to understand the sentence. What does that mean for the hemodynamic response? Would you get some activation because they actually filled in the words, or will there be no response because the acoustics are not there?

I think one of the things is that basic sensory processing with NIRS should be studied better. Comparable to presenting a visual grating and seeing whether there is some retinotopic organization, you should study whether the tonotopic organization of the auditory system can be found with NIRS. I expect that it will be very hard to find, but it would be interesting. At least then we would know better what the limitations and advantages of NIRS are. That would be the best way to go.
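As a hypothetical illustration of what such a tonotopy experiment might look like (our sketch, not something from the paper): present pure tones at logarithmically spaced frequencies and test whether different fNIRS channels respond preferentially to different frequencies.

```python
import numpy as np

fs = 44100                       # audio sample rate (Hz)
duration = 1.0                   # tone duration (s)
t = np.arange(int(fs * duration)) / fs

# Log-spaced test frequencies spanning much of the speech-relevant range.
frequencies = np.logspace(np.log10(250), np.log10(8000), num=6)

# Cosine on/off ramps to avoid audible clicks at tone onset and offset.
n_ramp = int(fs * 0.01)          # 10 ms ramps
envelope = np.ones_like(t)
ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
envelope[:n_ramp] = ramp
envelope[-n_ramp:] = ramp[::-1]

# One pure tone per test frequency; present these in randomized blocks
# during the fNIRS recording and look for frequency-tuned channels.
tones = [envelope * np.sin(2 * np.pi * f * t) for f in frequencies]
```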

How do you see the future of NIRS in the domain of audition?

Well, the best-known neuroimaging tool is fMRI, but fMRI has a huge problem. Not only do its magnetic components distort cochlear implants and vice versa, it also produces a lot of acoustic noise. In the field of audition, fNIRS is much better suited because it does not make any sound and therefore does not disturb the auditory system. It also neither affects nor is affected by the implants that are needed to restore hearing to otherwise deaf people.

How does the core research of your article contribute to the future of auditory research?

I think this paper is a straightforward, basic paper showing some limitations of our previous NIRS setup. I hope that people who read it will see the limitations regarding CI studies and will not make the same mistakes. Hopefully, building on our study, they can create better experiments, like the one we are doing now.

We would like to thank Marc van Wanrooij and the Radboud University Nijmegen for this interview. 

Are you interested in doing a follow-up study on the hearing-impaired, CI users, or any other audiovisual research? It is possible to rent the simulation lab, expertise, or the systems used by the Hearing and Implants Lab of the Donders Institute in Nijmegen. More info

Read paper

System used in this paper: OxyMon
