Poster Presentation


Hemispheric differences in speech perception: The role of right temporal regions in talker-specific learning

Poster Session B, Friday, September 12, 4:30 - 6:00 pm, Field House
This poster is part of the Sandbox Series.

Holly A. Zaharchuk1, Portia N. Washington1, Emily B. Myers1; 1University of Connecticut

The ease of everyday conversation belies the difficulty of low-level speech perception. Processing a speech signal requires the listener to repeatedly map a complex array of continuous acoustic cues onto a set of discrete phonetic category representations quickly and consistently. This challenging process is less effortful and more accurate when the listener’s language system is able to adapt to the talker’s unique phonetic space. Studies of talker-specific learning show that listeners not only are sensitive to the idiosyncratic phonetic details of a talker, but also use that information to recognize both who is talking and what is being said. Talker-specific perceptual learning is supported by a bilateral network of brain regions in temporal and frontal cortex. While left and right superior temporal gyri (STG) both show sensitivity to phonetic detail, there are two competing hypotheses regarding the division of labor between the two hemispheres. One proposal argues that the hemispheres differ in what information is represented, with left STG specializing in shorter spectral sweeps (e.g., stops) and right STG specializing in longer spectral sweeps (e.g., vowels). The other proposal is that the hemispheres differ in how information is represented, with the left being more categorical and the right being more gradient. More gradient representations of phonetic detail may also be related to more flexible processing, allowing right temporal regions to adapt more readily to novel talkers. In the present study, we test the hypothesis that right temporal regions are critical for talker-specific learning. We will investigate the role of the right hemisphere in representing talker-specific phonetic detail with a sparse sampling fMRI design. 
Participants will complete two different 1-back tasks in the scanner: in the 1-back talker identification task, participants will indicate whether two tokens were spoken by the same talker or not, and in the 1-back phonetic task, participants will indicate whether two tokens were the same word or not. The stimuli for this task will comprise tokens drawn from four five-point phonetic continua. These continua form a VOT-vowel matrix from “deal” to “teal” (VOT), “teal” to “till” (vowel), “till” to “dill” (VOT), and “dill” to “deal” (vowel). The tokens will be produced by four female talkers of Mainstream US English. Participants will complete three blocks each of the two 1-back tasks, with all six blocks comprising the same 64 tokens (16 stimuli × 4 talkers). We will use representational similarity analysis (RSA) to decode token information as a function of task. We hypothesize that attention to talker identity will enhance sensitivity to phonetic information in right temporal regions and diminish sensitivity to phonetic information in left temporal regions. Attention to phonetic categories should show the opposite pattern, with more fine-grained phonetic encoding in the left hemisphere. In other words, if right temporal regions are responsible for linking talker and phonetic information, then tokens will be represented more uniquely in these regions during talker identification than during word identification. This work will provide critical insight into the division of labor between the two hemispheres for speech perception.
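The RSA logic described above can be illustrated with a minimal sketch. This is not the authors' analysis code; the simulated voxel patterns, variable names, and the use of Pearson correlation to compare representational dissimilarity matrices (RDMs) are all illustrative assumptions. It builds binary model RDMs for the 64-token design (16 stimuli × 4 talkers) and scores how well each model predicts a (here, random) neural RDM from one region of interest:

```python
# Illustrative RSA sketch for a 16-stimulus x 4-talker design.
# NOT the study's actual pipeline: the "neural" patterns are simulated,
# and real analyses often use Spearman rather than Pearson correlation.
import numpy as np

rng = np.random.default_rng(0)

n_stimuli, n_talkers = 16, 4
# One (stimulus, talker) label per token: 64 tokens total
tokens = [(s, t) for s in range(n_stimuli) for t in range(n_talkers)]

# Model RDMs (0 = predicted similar, 1 = predicted dissimilar):
# phonetic model: same word/step is similar regardless of talker
phonetic_rdm = np.array([[0.0 if s1 == s2 else 1.0
                          for (s2, _) in tokens] for (s1, _) in tokens])
# talker model: same talker is similar regardless of word
talker_rdm = np.array([[0.0 if t1 == t2 else 1.0
                        for (_, t2) in tokens] for (_, t1) in tokens])

# Simulated multivoxel pattern per token for one ROI (e.g., right STG)
n_voxels = 50
patterns = rng.normal(size=(len(tokens), n_voxels))

# Neural RDM: 1 minus pairwise Pearson correlation of token patterns
neural_rdm = 1.0 - np.corrcoef(patterns)

def rdm_fit(model_rdm, data_rdm):
    """Correlate model and data RDMs over the lower triangle
    (off-diagonal entries only, since RDMs are symmetric)."""
    tri = np.tril_indices(model_rdm.shape[0], k=-1)
    return float(np.corrcoef(model_rdm[tri], data_rdm[tri])[0, 1])

phonetic_fit = rdm_fit(phonetic_rdm, neural_rdm)
talker_fit = rdm_fit(talker_rdm, neural_rdm)
```

Under the study's hypothesis, a right temporal ROI would show a higher `phonetic_fit` during the talker identification task than during the word identification task, with left temporal regions showing the reverse pattern.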

Topic Areas: Speech Perception
