Poster Presentation


Acoustically degraded speech slows prediction in online sentence comprehension: evidence from the visual world paradigm and pupillometry

Poster Session D, Saturday, September 13, 5:00 - 6:30 pm, Field House

Griffin Dugan1, Ryan M. O'Leary1, Arthur Wingfield1; 1Brandeis University

Many individuals with severe to profound hearing loss use cochlear implants (CIs) to interact with the auditory world. These implants exploit the tonotopic organization of the basilar membrane of the cochlea to directly stimulate the auditory nerve, and can often restore the percept of sound. However, the acoustic clarity of the signal transmitted through a CI differs markedly from typically perceived speech. Despite this, many CI users perform well on tasks such as word and sentence recognition in clinical testing environments. Less understood is why CI users, even those who perform well on such clinical tests, report that keeping up with everyday listening is difficult and exhausting. We suggest that clinical tests of word and sentence perception are insensitive to the continual temporal demands of listening in everyday life. Even when speech perception is successful, increased difficulty due to the degraded clarity of the speech may delay the predictive processes necessary for fluid semantic integration. This study addresses this question by employing the visual world paradigm, with a simulation of the CI signal, to use eye gaze to index the speed at which a listener can predict a word within a sentence. Forty young adult listeners with age-normal hearing were presented with predictable sentences that were either heard in clear speech or processed to sound like a CI using noise-band vocoding (6 channels). To manipulate temporal demands, sentences were heard either at a normal rate or time-compressed to 50% of their original playing time. Eye gaze was tracked to determine an early predictive point in the sentence at which the listener understands the sentence well enough to correctly predict how it will end. The latency with which participants used the computer mouse to click on the selected word was taken as an index of their confirmatory decision.
Additionally, listeners' pupil size was tracked at and after the onset of the sentence-final word to measure differences in the trajectories of cognitive effort between conditions. Results demonstrate that prediction of the sentence-final word was delayed when speech was vocoded to sound like a CI or increased in rate using time compression. When speech was both vocoded and time-compressed, prediction of the sentence-final word was especially late, occurring around the onset of the word itself. Pupillary results indicate that cognitive effort remained elevated for an extended period after the sentence was heard in the more difficult conditions. Results are interpreted as reflecting a processing delay between the input of the speech signal and comprehension of the sentence's meaning for those who face auditory challenges.

Topic Areas: Speech Perception
