Matt Winn, Au.D., Ph.D.
University of Minnesota

Research

Here you will find information about the Listen Lab at the University of Minnesota in the Twin Cities.

The lab is supported by the NIH National Institute on Deafness and Other Communication Disorders (NIDCD) grant R01DC017114, "Listening effort in cochlear implant users".

 

Matthew Winn (director) is also currently supported by the NIH Division of Loan Repayment.

These are the main areas of research in the lab:

Listening effort

Cochlear implants

Speech perception

Acoustic cue weighting

Binaural hearing

Data visualization

 

Listening effort


People with hearing impairment need to put in more mental effort for routine conversational listening. As a result, by the end of the day there can be little energy left for socializing, playing with the kids, or other adventures. Audiologists frequently hear stories from patients who once enjoyed fun things like the theater, dining out, game night, church and the comedy club, but who now think it’s simply not worth the hassle. Numerous published surveys suggest that listening effort likely plays a role in the increased prevalence of sick leave, unemployment, and early retirement among people with hearing impairment.
At the Listen Lab, we aim to measure the listening effort involved in speech perception, and we examine effort from multiple angles.

 

One of our main tools for measuring listening effort is pupillometry. When a person is under more stress and tries harder, the pupils dilate more. We use pupil dilation as an index of how difficult it is to listen to speech.
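By way of illustration, a pupillometry trial is commonly summarized by subtracting a pre-stimulus baseline and taking the peak or mean dilation of what remains. This is only a toy sketch, not the lab's actual pipeline; the function name and parameters are made up:

```python
import numpy as np

def effort_index(pupil_trace, fs, baseline_dur=1.0):
    """Summarize one pupillometry trial (hypothetical pipeline):
    subtract the mean pre-stimulus baseline, then report peak and
    mean dilation over the remaining (post-stimulus) window."""
    n_base = int(baseline_dur * fs)
    baseline = pupil_trace[:n_base].mean()
    corrected = pupil_trace[n_base:] - baseline
    return {"peak": corrected.max(), "mean": corrected.mean()}
```

Larger peak or mean dilation relative to baseline is then interpreted as greater momentary effort.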


What affects listening effort?

The acoustic clarity of sentences

Winn, Edwards and Litovsky (2015) showed that systematic degradations in the frequency resolution of speech are related to corresponding systematic increases in listening effort.

The presence or absence of contextual cues within a sentence 

Winn (2016) found that sentences that have internal coherence (e.g. "The detective searched for a clue") are easier to process than sentences with unpredictable words (e.g. "Nobody heard about the clue").

Using a cochlear implant

In the same study by Winn (2016), individuals who use cochlear implants showed delayed processing of contextual cues, possibly suggesting slower language processing because of the poor sound clarity.

Distracting sound just after a sentence

In a study by Winn and Moore (2018), listeners with cochlear implants heard sentences. We suspected that they used an extra moment after each sentence to think back and "repair" words that were misperceived along the way. To test this, we inserted a bit of noise or extra speech after the sentence, and found that it had a negative impact on their ability to use context to reduce listening effort.

Speaking rate

Listeners with cochlear implants are more likely to experience reduced effort and better processing of contextual cues when speech is spoken more slowly (shown in a yet-unpublished study by Winn and Teece, 2020).


There are MANY questions that remain to be explored with regard to listening effort.

For example:

How can listening effort measures be adapted for use in an audiology clinic? (ongoing work by Steven Gianakas and Matt Winn)

How is effort affected by your knowledge of conversation topic?

How is effort affected by the talker's accent and prosody? (ongoing work by Maria Paula Rodriguez)

Can we change the processing parameters of a CI or a hearing aid so that effort is reduced?

How do lab-based measures of momentary effort relate to a person's experience of listening effort throughout a day and a week? 

How are measures of effort consistent with or inconsistent with measures of speech intelligibility?


 

Cochlear implants

Cochlear implants (CIs) provide a sensation of hearing to people who have severe to profound deafness and who choose to participate in the hearing/speaking community. The microphone and speech processor receive sound through the air and convert the sound information into a sequence of electrical pulses that are used to stimulate the auditory nerve. This is designed to parallel the normal process of hearing, where mechanical movements in the ear are translated into electrical stimulation for the nerves.
The Listen Lab research on CIs focuses on the representation of spectral (frequency) information, which is known to be severely degraded. We are interested in the perceptual consequences of degraded spectral resolution in terms of success in speech perception, acoustic cue weighting, and the effort required to understand speech.
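Degraded spectral resolution of this kind is often simulated for research purposes with a noise vocoder, which keeps each frequency band's slow temporal envelope but replaces the fine detail with noise. The sketch below is a deliberately crude illustration of the idea, not the lab's actual stimulus-generation code; all function names and parameter values are made up:

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude brick-wall bandpass filter in the frequency domain."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    X[(freqs < lo) | (freqs >= hi)] = 0
    return np.fft.irfft(X, n=len(x))

def envelope(x, fs, cutoff=50.0):
    """Temporal envelope via rectification + moving-average smoothing."""
    win = max(1, int(fs / cutoff))
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def noise_vocoder(x, fs, n_channels=4, f_lo=100.0, f_hi=8000.0):
    """Toy noise vocoder: each band's envelope modulates band-limited noise,
    discarding spectral fine structure (fewer channels = poorer resolution)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(bandpass_fft(x, fs, lo, hi), fs)
        carrier = bandpass_fft(rng.standard_normal(len(x)), fs, lo, hi)
        out += env * carrier
    return out
```

Reducing n_channels coarsens the spectral representation, which is one way researchers mimic the resolution limits of a CI for listeners with normal hearing.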



 

Using speech to learn about the auditory system

A central technique of the lab is to use speech sounds to learn about the auditory system. We are interested in developing ways to create high-quality realistic speech stimuli that are suitable for testing auditory processing.

Speech contains a variety of acoustic components that stimulate the auditory system in the spectral, temporal, and spectro-temporal domains. There are so many dimensions to explore! By exploiting these properties of speech, we can learn a lot about the corresponding properties of the auditory system.

Such exploration is typically done with non-speech sounds in psychoacoustic experiments. There is a rich history of psychoacoustics, but a surprising lack of connection to real components of speech sounds. We hope to bridge that gap and, in the process, learn something valuable about how to understand the perception of speech by people with normal hearing and by people with hearing impairment.


For any particular acoustic cue that you’re interested in, there is usually at least one speech contrast that depends on that cue (for example, voice onset time distinguishes "b" from "p").

Together with Christian Stilp (University of Louisville), Matthew Winn has authored a chapter in the Routledge Handbook of Phonetics that introduces some ways in which thinking about the auditory system and speech acoustics at the same time can lead to a richer understanding of each.


Acoustic cue weighting

With so many aspects of speech changing at the same time, people have multiple options for different “strategies” to identify speech sounds.
A good analogy to understand cue weighting is the different cues at a traffic light.

Different people can use different cues to arrive at the same exact information – that it’s okay to go, or it’s time to stop: the color of the light, its position on the signal, seeing other cars move around you, or hearing blaring horns from cars behind you (but not in the Midwest!)

The Listen Lab studies how multiple cues in speech sounds can be decoded by people who try to identify the speech. This is particularly important for understanding hearing impairment, which can force some people to tune in to cues that are different than the ones used by people with normal hearing.

Some of our previous work suggests that listeners increase reliance upon temporal cues when frequency resolution is degraded. This has particular implications for people who use cochlear implants, because they are known to experience especially poor frequency resolution.
Ongoing work explores how cue weighting is connected with listening effort.
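Cue weighting is often quantified by having listeners categorize stimuli that vary along two cues and fitting a regression to their binary responses; the relative sizes of the coefficients index each cue's perceptual weight. The following is a minimal sketch with hypothetical data and made-up function names, not the lab's analysis code:

```python
import numpy as np

def zscore(x):
    """Standardize a cue dimension so coefficients are comparable."""
    return (x - x.mean()) / x.std()

def fit_cue_weights(cue1, cue2, responses, lr=0.5, n_iter=2000):
    """Fit a logistic regression of binary categorization responses on
    two standardized acoustic cues via gradient ascent on the likelihood.
    Returns the two cue coefficients; larger magnitude = heavier weight."""
    X = np.column_stack([np.ones(len(cue1)), zscore(cue1), zscore(cue2)])
    w = np.zeros(3)
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ w))                       # P("category A")
        w += lr * X.T @ (responses - p) / len(responses)   # gradient step
    return w[1:]  # drop the intercept
```

A listener who relies mostly on the first cue will show a much larger first coefficient, even if both cues carry some information.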


 

Binaural Hearing

Binaural hearing refers to the coordination of both ears to learn about sounds in the environment. It's more than just hearing on the left and right; it's the ability to know where sounds are coming from, and to distinguish one sound from a background of noise. Binaural hearing relies on some of the fastest and most precise neural coding in the brain to be successful.

At the Listen Lab, we have explored binaural hearing sensitivity using high-speed eye-tracking methods that demonstrate the speed, certainty and reliability of our judgments of sound cues. We hope to use this paradigm to test basic psychoacoustic abilities, especially in cases where it is difficult to test (e.g. in children), difficult to coordinate the two ears (e.g. hearing with cochlear implants), or in cases where the binaural system might have been damaged by traumatic brain injury or blast exposure.
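One of the basic binaural cues underlying sound localization is the interaural time difference (ITD): the sub-millisecond offset between a sound's arrival at the two ears. As a toy illustration (not the lab's eye-tracking paradigm), an ITD can be estimated from the two ear signals by cross-correlation:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference between two ear signals:
    find the lag that maximizes their cross-correlation.
    Returns seconds; positive means the left ear leads."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)  # samples by which right lags left
    return lag / fs
```

The binaural system performs an analogous comparison neurally, with precision on the order of tens of microseconds.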


 

Statistical modeling and data visualization

Statistical modeling is an essential part of any research. In the lab, we are enthusiastic about finding the most effective ways to describe data and the behavior that leads up to it.

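One modeling approach commonly applied to time-course data such as pupil dilation is growth curve analysis: describing a trace with orthogonal polynomial time terms (intercept, linear, quadratic, ...). The lab works in R; purely as an illustrative sketch of the core fitting step, in Python:

```python
import numpy as np

def fit_growth_curve(time, trace, degree=2):
    """Fit orthogonal polynomial time terms to a time course (the core of
    growth curve analysis). Returns one coefficient per term: overall level,
    linear trend, quadratic curvature, and so on up to `degree`."""
    # Build an orthogonal polynomial basis via QR decomposition
    # of a Vandermonde matrix of the time variable.
    V = np.vander(time, degree + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    coefs, *_ = np.linalg.lstsq(Q, trace, rcond=None)
    return coefs
```

Because the basis terms are orthogonal, each coefficient can be interpreted independently, e.g. a steeper linear term for a harder listening condition.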

Thoughtful data visualization is an effective way to help others understand your research. I strive to visualize data in ways that reveal unexpected patterns, or that convey information so as to facilitate learning.

To create visualizations, I prefer to use the ggplot2 package in the R programming environment.

You can find a fun new visualization of Marvel's Avengers character scripts here

You can also view some interesting visualizations of vowel acoustics here

 

(Gallery of example visualizations: spectral tilt continuum group comparisons, effort release plot, temporal cues, MDT, envelope compression, pupillometry data, voice onset time, electric rippled spectra, slope differences.)
