Mapping and decoding semantic representations in aphasia after stroke
Poster Session B, Friday, September 12, 4:30 - 6:00 pm, Field House
Jerry Tang¹, Carly Millanski¹, Lisa D. Wauters¹, Shilpa Shamapant², Stephen M. Wilson³, Alexander G. Huth¹, Maya L. Henry¹; ¹University of Texas at Austin, ²Austin Speech Labs, ³University of Queensland
People with aphasia often report knowing what they want to say but not being able to say it. This suggests that semantic representations are relatively preserved but that the mapping to lexical-phonological output is disrupted. One potential way to help people with aphasia is to predict what they want to say by decoding their semantic representations. Semantic decoding has been demonstrated in neurologically healthy participants by using naturalistic stimuli to map the concepts that are encoded in each cortical region. Given new brain responses, these semantic maps can be used to predict the concepts that a participant is thinking about. However, most neuroimaging studies of participants with aphasia have relied on controlled stimuli that lack the conceptual coverage required to map the semantic system. As a result, it is unclear how the organization of the semantic system is affected in aphasia, and whether semantic representations are sufficiently preserved to be accurately decoded. In this study, we used functional magnetic resonance imaging (fMRI) to record brain responses from two participants with stroke-induced aphasia and three neurologically healthy controls. One participant had mild-to-moderate aphasia predominantly affecting language production, while the other had severe expressive-receptive aphasia and motor speech impairment. All participants watched one hour of silent movies and listened to three hours of narrative stories. We separately recorded brain responses from the neurologically healthy controls while they listened to a larger set of narrative stories, which we used to train encoding models for mapping linguistic representations and semantic decoding models for reconstructing continuous language from new brain responses. Finally, we transferred these models from the neurologically healthy controls to the participants with aphasia by functionally aligning their brain responses to the shared movies and stories. To assess how the functional neuroanatomy of speech-language processing is affected in aphasia, we compared prediction performance across encoding models that capture information from different linguistic levels. We found that, within spared cortical regions, semantically selective areas were consistent between the participants with aphasia and the neurologically healthy controls. To assess how the organization of the semantic system is affected in aphasia, we visualized the weights of a lexical semantic encoding model, which indicate the concepts that are encoded in each cortical region. We found that the organization of semantic concepts in spared cortical regions was also consistent between the participants with aphasia and the neurologically healthy controls. To test whether semantic representations can be decoded in people with aphasia, we applied the semantic decoders to brain responses recorded while the participants watched new movies and listened to new stories. We found that the decoders successfully recovered the gist of the movies and stories. Together, our results indicate that people with stroke-induced aphasia can have preserved semantic representations, and that the organization of these representations is consistent with that of neurologically healthy controls. These results demonstrate the potential for semantic brain-computer interfaces to improve communication in people with aphasia.
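The encoding-model step described above follows the general voxelwise modeling approach used with naturalistic stimuli, but the abstract gives no implementation details. The sketch below is therefore only a minimal, hypothetical version: it assumes ridge regression from time-delayed semantic stimulus features (e.g., word embeddings of the story transcripts) to each voxel's BOLD response. All variable names, array shapes, and delay choices are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Hypothetical shapes: T timepoints, F semantic features, V voxels.
# stim_features: (T, F) semantic embedding of the stimulus at each TR.
# bold: (T, V) preprocessed fMRI responses from one participant.

def make_delayed(features, delays=(1, 2, 3, 4)):
    """Concatenate time-shifted copies of the features so a purely
    linear model can absorb the several-second lag of the
    hemodynamic response."""
    T, F = features.shape
    delayed = np.zeros((T, F * len(delays)))
    for i, d in enumerate(delays):
        delayed[d:, i * F:(i + 1) * F] = features[:T - d]
    return delayed

def fit_encoding_model(stim_features, bold):
    """Fit regularized linear regressions (one readout per voxel,
    handled jointly by RidgeCV) from delayed stimulus features to
    BOLD responses."""
    X = make_delayed(stim_features)
    model = RidgeCV(alphas=np.logspace(0, 4, 10))
    model.fit(X, bold)
    return model

def evaluate(model, test_features, test_bold):
    """Voxelwise prediction performance: Pearson correlation between
    predicted and observed responses to held-out stimuli."""
    pred = model.predict(make_delayed(test_features))
    pred_c = pred - pred.mean(0)
    obs_c = test_bold - test_bold.mean(0)
    return (pred_c * obs_c).sum(0) / (
        np.linalg.norm(pred_c, axis=0) * np.linalg.norm(obs_c, axis=0))
```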
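The transfer step, functionally aligning the participants with aphasia to the controls via the shared movies and stories, can likewise be sketched as a linear mapping estimated on time-matched responses to the shared stimuli. This formulation is an assumption made for illustration; the study may use a different alignment method, and every name below is hypothetical.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def fit_alignment(patient_shared_bold, control_shared_bold):
    """Estimate a linear map from the patient's voxel space into the
    control's voxel space, using responses to the shared movies and
    stories (rows of both arrays are the same timepoints)."""
    align = RidgeCV(alphas=np.logspace(0, 4, 10))
    align.fit(patient_shared_bold, control_shared_bold)
    return align

# Usage sketch (all variable names are hypothetical):
# align = fit_alignment(patient_shared_bold, control_shared_bold)
# Project the patient's responses to *new* stimuli into control space,
# then apply the decoder that was trained on the control's data.
# projected = align.predict(patient_new_bold)
# decoded = control_decoder.predict(projected)
```

The design choice here is that models only ever need to be trained on the controls' large story set; the patients contribute just enough shared data to estimate the alignment.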
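Finally, the claim that the decoders "recovered the gist" of held-out stimuli can be made concrete with an identification-style analysis: compare the decoded output to the true transcript and to distractor transcripts in a shared semantic embedding space. The scorer below is a minimal, assumed version; the embedding function and the identification criterion are not specified in the abstract.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identification_accuracy(decoded_emb, true_emb, distractor_embs):
    """Fraction of distractor transcripts that match the decoded
    output *less* well than the true transcript; chance is 0.5.
    All embeddings are hypothetical fixed-length semantic vectors."""
    true_sim = cosine(decoded_emb, true_emb)
    wins = [true_sim > cosine(decoded_emb, d) for d in distractor_embs]
    return float(np.mean(wins))
```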
Topic Areas: Disorders: Acquired, Meaning: Lexical Semantics