When words hit harder: Neural speech tracking is stronger for angry than for neutral speech at the word rate
Poster Session E, Sunday, September 14, 11:00 am - 12:30 pm, Field House
Natálie Kikoťová1, Kateřina Chládková1,2; 1Institute of Psychology, Czech Academy of Sciences, 2Institute of Czech Language and Theory of Communication, Faculty of Arts, Charles University
Emotional speech plays a crucial role in navigating the social world. As a result, emotional words attract more attention than neutral words, and more cognitive resources are allocated to their processing (Fields & Kuperberg, 2012, NeuroImage; Kissler et al., 2009, Biological Psychology; Sander et al., 2005, Nature Neuroscience). In social settings, however, we encounter emotional language as a continuous auditory stream rather than as individual words. Little is known about the neural processing of continuous emotional speech and about the role of neural oscillations in the underlying mechanisms. The present study investigated differences in the neural processing of ongoing emotional and non-emotional speech.

Twenty-six participants listened to recordings of angry and neutral conversation segments, as well as to speech-shaped noise, while their EEG was recorded. An example of an angry segment (in Czech, the participants' native language) was "Přestaň na mě takhle čumět. Ten tvůj debilní ksicht mě irituje." [Stop staring at me like that. This dumb face of yours is annoying me.] Frequency bands of interest were derived from the temporal structure of the stimuli, corresponding to the rates of phrases, words, and syllables in the angry and neutral recordings. Neural speech tracking, quantified as oscillatory power and inter-trial phase coherence (ITPC) in these frequency bands, was then analyzed.

The results revealed larger oscillatory power across the phrase, word, and syllable bands during exposure to angry speech than to neutral speech (mean slope = 0.599, t = 3.775, p < .001). This effect was most pronounced in anterior regions. There was a main effect of valence on inter-trial phase coherence, indicating that phase coherence was lower in the angry condition (mean slope = -0.017, t = -5.042, p < .001). For ITPC, there was also an interaction of valence and rate (mean slope = 0.025, t = 5.251, p < .001).
Comparisons of estimated means and 95% confidence intervals revealed that the effect of valence went in opposite directions for phrases and syllables (stronger ITPC for neutral speech) than for words (stronger ITPC for angry speech), and this difference was most pronounced at anterior sites.

The stronger inter-trial phase coherence for neutral speech in the syllable and phrase ranges indicates more accurate neural tracking of commonly encountered speech patterns. Interestingly, the reverse pattern at the word rate, i.e., stronger phase coherence for angry speech, might be driven by the better predictability of lexical content in emotional speech, potentially reflecting stronger top-down modulation of how the brain tracks speech. The present findings extend prior literature that has mostly addressed event-related processing of emotional content in isolated words or phrases (Chen et al., 2013, Neuroscience Letters; Fields & Kuperberg, 2012, NeuroImage; Kissler & Herbert, 2013, Biological Psychology). Our study provides novel insights into the oscillatory dynamics underlying the processing of naturalistic, continuous emotional speech.
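The ITPC measure reported above has a standard definition: the magnitude of the average unit phase vector across trials, ranging from 0 (random phase across trials) to 1 (perfect phase alignment). As a minimal illustration of that formula (a sketch, not the authors' analysis pipeline; the function name and array shapes are assumptions):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence.

    phases: array of shape (n_trials, n_times) holding the
    instantaneous phase (in radians) of band-filtered EEG,
    e.g., in a word-rate frequency band.
    Returns ITPC per time point: the magnitude of the mean
    unit phase vector across trials, a value in [0, 1].
    """
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Identical phases on every trial give perfect coherence.
aligned = np.zeros((20, 100))
print(itpc(aligned)[0])  # → 1.0

# Uniformly random phases give ITPC near 0 with many trials.
rng = np.random.default_rng(0)
random_phases = rng.uniform(-np.pi, np.pi, size=(2000, 100))
print(itpc(random_phases).max() < 0.1)  # → True
```

In practice, the instantaneous phase in each rate-defined band would typically be obtained by band-pass filtering the EEG and applying a Hilbert transform (or wavelet convolution) before computing ITPC.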
Topic Areas: Speech Perception