Poster Presentation


Neural Decoding of the Semantics of Silently Imagined Answers

Poster Session C, Saturday, September 13, 11:00 am - 12:30 pm, Field House

Jiaqing Tong1, Jeffrey Binder1, Leonardo Fernandino1, William Gross1, Andrew Anderson1; 1Medical College of Wisconsin

Strokes that damage phonological brain systems at the core of the middle cerebral artery territory can leave people permanently incapable of finding the words to convey their thoughts as speech or writing, despite remaining intellectually intact and able to understand language. Because verbal recovery prospects in severe cases of such phonological retrieval deficits are often low, and current Augmentative and Alternative Communication technologies have limited utility (e.g., synthesizing pre-scripted speech via a tablet), there is a need to develop new methods to assist communication. Recently introduced semantic brain-decoding models that transcribe the gist of comprehended language and imagined speech from fMRI data may provide a foundation for new communication technologies for aphasia. However, because decoding models have been tested on either comprehension or imagined speech in isolation, it is unknown how well they can distinguish the semantics of imagined language production from comprehension when the two processes overlap, as would be essential for a conversational application. It is also unknown how well semantic decoding approaches generalize to people with aphasia. Here we present work in progress that seeks to address these questions. The overarching hypotheses are: (1) the semantics of imagined answers can be selectively decoded from fMRI activation; and (2) this ability will generalize to people with aphasia when semantic cognition is preserved despite phonological damage. To date, six neurotypical participants and one participant with phonological impairment from stroke have been recruited. To enable objective evaluation of the ability to decode imagined answers, all participants attended a visit during which they memorized the profiles of five distinctive fictional characters, which formed the basis for questioning.
Profiles were presented as memorable AI-generated videos, e.g., one introducing Mary from NYC, who is a policewoman, paints as a hobby, and has a pet chameleon. Participants’ ability to memorize the profiles was confirmed via a card-arrangement task (e.g., matching a “Mary” card to “Painting” rather than “Cycling”). Participants later underwent fMRI as they imagined the answers to questions about the characters’ profile features, e.g., Stimulus: “What is Mary’s job?”, Silently imagined answer: “She’s a police lady”. To evaluate whether the semantics of imagined answers could be distinguished from those of the questions in fMRI data, a partial-correlation-based Representational Similarity Analysis (RSA) was deployed. To model the semantics of the questions and answers, the corresponding texts were processed through GPT-2-medium. A single vector for each question and each answer was derived as the pointwise average of word embeddings extracted from Layer 16 (of 24). The representational geometry of fMRI activation extracted from the Default Mode Network was then correlated against the Answer model, controlling for the Question model. Statistical significance was evaluated via permutation tests. Significant positive RSA coefficients were detected for 4/6 neurotypical participants. In the participant with aphasia, RSA coefficients reflecting the semantics of the question, but not the answer, were detected with a ~5 s delay relative to neurotypical adults. This study provides early evidence that the semantic content of imagined answers can be selectively decoded from fMRI data independently of comprehension processes, and it identifies challenges surrounding the adaptation of decoding approaches for aphasia.
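The partial-correlation RSA described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the study's actual pipeline: the trial count, embedding dimensionality, noise level, and helper names are assumptions, and the random vectors stand in for real GPT-2-medium Layer-16 embeddings and Default Mode Network activation patterns.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import rankdata

rng = np.random.default_rng(0)

def rdm(features):
    """Condensed representational dissimilarity matrix (correlation distance)."""
    return pdist(features, metric="correlation")

def partial_spearman(x, y, z):
    """Spearman correlation of x and y, controlling for z (rank-residual method)."""
    xr, yr, zr = rankdata(x), rankdata(y), rankdata(z)
    def resid(a, b):
        # residualize a against b with a linear fit
        return a - np.polyval(np.polyfit(b, a, 1), b)
    return np.corrcoef(resid(xr, zr), resid(yr, zr))[0, 1]

# Synthetic stand-ins: 20 Q&A trials; 1024 dims matches GPT-2-medium's
# hidden-state size, but these are random vectors, not real embeddings.
n_trials = 20
answer_emb = rng.normal(size=(n_trials, 1024))
question_emb = rng.normal(size=(n_trials, 1024))
# Simulated "brain" data that tracks the answer semantics plus noise.
brain = answer_emb + 0.5 * rng.normal(size=(n_trials, 1024))

brain_rdm, ans_rdm, q_rdm = rdm(brain), rdm(answer_emb), rdm(question_emb)
# RSA coefficient: brain geometry vs. Answer model, controlling for Questions.
observed = partial_spearman(brain_rdm, ans_rdm, q_rdm)

# Permutation test: shuffle trial labels of the brain RDM to build a null.
n_perm = 500
full = squareform(brain_rdm)
null = []
for _ in range(n_perm):
    p = rng.permutation(n_trials)
    null.append(partial_spearman(squareform(full[np.ix_(p, p)]), ans_rdm, q_rdm))
p_value = (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)
print(f"partial RSA r = {observed:.3f}, p = {p_value:.4f}")
```

In the actual analysis the brain RDM would be computed from Default Mode Network voxel patterns per trial, and the model RDMs from the averaged Layer-16 word embeddings; the permutation scheme shown (relabeling trials) is one common choice for RDM-level inference.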

Topic Areas: Disorders: Acquired, Meaning: Lexical Semantics
