Speech-in-noise representations in the developing human brain
Poster Session E, Sunday, September 14, 11:00 am - 12:30 pm, Field House
Kevin Sitek1, Jason Bohland2, Ashley Parker2, Bharath Chandrasekaran1, Amanda Hampton Wray2; 1Northwestern University, 2University of Pittsburgh
Understanding speech in noisy environments relies on the successful integration of canonical speech perception processes with active processes that enhance the signal and reduce the noise (Davis & Johnsrude, 2003; Eckert et al., 2016). In adults, beyond the speech processing regions in posterior superior temporal cortex and inferior frontal gyrus, speech-in-noise processing is supported by the speech motor system (Du et al., 2014) as well as an extra-auditory network including anterior insula, anterior cingulate, and middle frontal gyrus (Vaden et al., 2013) that is especially engaged under more challenging listening conditions. In children, speech processing networks have been shown to emerge slowly and shift gradually over the course of typical development. Children often demonstrate behavioral challenges in speech-in-noise processing, which have been correlated with stimulus encoding fidelity measured with EEG (White-Schwoch et al., 2015). However, it is not clear how the specific neural processes underlying speech perception and speech-in-noise processing are instantiated during development. To understand the brain networks involved in processing speech in challenging listening environments in the developing human brain, we conducted a functional MRI speech-in-noise experiment with 7-to-13-year-olds. Participants performed a syllable identification task (“ba”, “da”, “ga”, or “ma”) in a Siemens 3 Tesla MRI scanner during functional MRI acquisition. Syllables were spoken by one of four talkers (2 male, 2 female) and presented in one of five speech-shaped noise contexts (quiet and SNR = +8, 0, -2, and -6 dB). Participants completed three runs of 160 trials each (two presentations per syllable, per talker, and per noise context), lasting 6.5 minutes per run (20 minutes total).
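The trial counts above follow from the factorial stimulus design. A minimal sketch of that arithmetic (variable names are illustrative, not from the study):

```python
# Factorial design: 4 syllables x 4 talkers x 5 noise contexts,
# with 2 presentations of each unique combination per run.
syllables = ["ba", "da", "ga", "ma"]
n_talkers = 4        # 2 male, 2 female
n_noise_contexts = 5 # quiet and SNR = +8, 0, -2, -6 dB
n_presentations = 2  # presentations per unique combination, per run

trials_per_run = len(syllables) * n_talkers * n_noise_contexts * n_presentations
n_runs = 3

print(trials_per_run)          # 160 trials per run
print(trials_per_run * n_runs) # 480 trials total
```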
Functional MRI data were preprocessed with fMRIPrep, which performs best-practice motion correction, image coregistration, template normalization, unwarping, noise component extraction, segmentation, and skull-stripping. We first investigated the univariate fMRI contrast between the quiet and highest-noise conditions and found stronger responses to speech in quiet than to speech in noise in bilateral superior temporal gyrus (STG). Conversely, responses in the head of the caudate were higher in noise than in quiet, although these differences were not statistically significant. Next, to examine how noise is represented within the developing brain, we conducted representational similarity analysis (RSA), which allows us to directly compare stimulus characteristics (here, background noise level) with patterns of neural responses to the stimuli. We conducted searchlight RSA across the whole brain and found significant correlations between the neural response and noise level dissimilarity matrices in left STG, right caudate head, and bilateral inferior frontal gyrus (IFG). Taken together, these findings suggest that while children more strongly activate broad canonical auditory and language regions for clear speech compared with noisy stimuli, background noise level itself is represented in specific regions that may come online to facilitate speech processing in challenging listening conditions. Future work will investigate the emergence of noise level representations over development, their relationship to behavior, and how these representations are modulated by developmental communication disorders.
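The core RSA computation described above can be sketched as follows. This is a minimal illustration, not the study's analysis pipeline: the response patterns and noise labels are simulated, the "quiet" SNR code is arbitrary, and the whole-brain searchlight step (repeating this comparison in a sphere around every voxel) is omitted.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated inputs: one multivoxel response pattern per noise condition
# (quiet and SNR = +8, 0, -2, -6 dB), e.g. trial-averaged beta estimates.
snr_db = np.array([99.0, 8.0, 0.0, -2.0, -6.0])  # "quiet" coded arbitrarily high
patterns = rng.standard_normal((5, 50))          # 5 conditions x 50 voxels

# Model RDM: pairwise dissimilarity in background noise level.
model_rdm = pdist(snr_db[:, None], metric="euclidean")

# Neural RDM: pairwise dissimilarity of response patterns (1 - correlation).
neural_rdm = pdist(patterns, metric="correlation")

# RSA statistic: rank correlation between the two dissimilarity structures.
rho, p = spearmanr(model_rdm, neural_rdm)
print(rho)
```

In the searchlight variant, `patterns` would be restricted to the voxels in each local sphere, yielding a map of model-to-neural correlations across the brain.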
Topic Areas: Speech Perception, Language Development/Acquisition