Poster Presentation


Acoustic degradation of speech through time compression and background noise independently affects content memory and neural tracking of speech

Poster Session A, Friday, September 12, 11:00 am - 12:30 pm, Field House

Sung-Joo Lim1, Mako Ikeda1, Walter Dych1; 1Binghamton University

The growing use of online education platforms offers students greater convenience and flexibility to access knowledge and content remotely. However, this also means that students often attempt to learn in environments that are less acoustically controlled than live lectures. For example, students often prefer to watch recorded lectures at accelerated playback speeds, frequently in places where irrelevant background noise is present. While listeners can rapidly adapt to acoustically distorted speech signals, such as time-compressed speech or speech in noise, memory retention of the speech content declines at higher time-compression rates, especially in adverse listening conditions. However, the neural bases of the content-memory detriment from acoustic degradation through accelerated speech and/or background noise remain unclear; that is, it is unknown whether the memory detriment stems from the brain's inability to track speech signals during encoding. Here, we examined how speech distortions from time-compression influence neural tracking of speech, and whether the presence of background noise exacerbates this effect. We recorded and modeled the electroencephalogram (EEG) of 28 participants who listened to six recorded audio lectures at varying time-compression rates (1.0x, 1.5x, 2.0x) and background noise levels (quiet vs. babble noise at +10 dB SNR). We assessed participants' ability to learn and retain the spoken content using lecture-specific content-knowledge quizzes administered immediately after each lecture. To quantify neural tracking of speech, we used a backward modeling approach based on an acoustic feature of the speech (i.e., speech envelope onsets). Behaviorally, listeners' memory of the lecture content was significantly worse at higher time-compression rates and in the presence of background noise.
The fidelity of the neural representation of the speech signal, indexed by speech envelope reconstruction accuracy, was relatively unaffected by the time-compression rate but was significantly lower when background noise was present. In addition, the strength of neural tracking, assessed through the amplitudes of temporal response function (TRF) peaks, was significantly reduced with background noise, as reflected in attenuation of the early TRF peaks (P1 and N1). In contrast, neural tracking of speech was enhanced at higher time-compression rates, particularly in the later (N1 and P2) TRF peaks. These findings suggest that while background noise can disrupt early stages of speech encoding, potentially by masking acoustic features of speech, the brain can robustly track faster rates of speech, possibly by engaging top-down processes related to attention. Overall, these findings indicate that although both time-compression and background noise degrade the speech signal, they independently create bottlenecks at different stages of speech processing, which can ultimately impair content memory.

Topic Areas: Speech Perception
