Decoding and Characterizing the Intracranial Representation of Semantic Information
Poster Session E, Sunday, September 14, 11:00 am - 12:30 pm, Field House
Christophe Smith1, Bryant Barrentine1, John Settles3, Sophia Inchyna1, John Hale4, Adam Mamelak2, Ueli Rutishauser2, Marshall Holland1, Jasmine Thum1, Nicole Bentley1, Matthew Nelson1; 1University of Alabama at Birmingham, 2Cedars-Sinai Medical Center, 3University of Georgia, 4Johns Hopkins University
An emerging treatment for speech loss resulting from a neurological condition is a language-based brain-machine interface (BMI). Current state-of-the-art systems target the sensorimotor cortex to decode attempted speech; however, such systems have difficulty distinguishing phonetically similar words like ‘fork’ and ‘fort’. Incorporating even crude semantic information, such as the knowledge that an item is a kitchen utensil rather than a building, should improve these systems’ performance. Here we investigate the feasibility of this approach while also better characterizing neural semantic representations across cognitive tasks. We performed intracranial (sEEG and ECoG) recordings from 18 epilepsy patients while they performed a battery of 5 semantic processing tasks comprising 3 language production tasks (picture naming, semantic category naming, and a category-fluency/noun-generation task) and 2 comprehension tasks (word-to-picture matching and semantic category matching). The comprehension tasks used both audio and text presentation of each word stimulus in different trials. The stimulus in each trial was a noun from one of 15 concrete semantic categories (tools, animals, foods, etc.). We trained linear support vector machines to classify the semantic category, as well as the animacy, of the item in each trial within a single task, based on the time-windowed high gamma power (70-150 Hz) from all channels recorded in a given patient; our dataset provides ample coverage of the left-hemisphere semantic language network. The highest-performing semantic category decoder reached 79.4% accuracy (10-fold cross-validation; chance = 6.7%), while the best animacy decoder reached 100% accuracy (chance = 50%). These results are a marked improvement over prior studies such as Rogers et al. (2021), which achieved a maximum accuracy of 75% for animacy decoding. We evaluated cross-task model generalization and found that models trained on a production task generalized best to other production tasks and more weakly, but still significantly, to comprehension tasks. Models trained on comprehension tasks generalized more weakly, but still significantly, to all tasks, and did so to a comparable degree for production and comprehension tasks. Note that these models are trained to discriminate between categories within a task, so these results do not reflect, for example, generic motor-preparatory activity unique to speech. Altogether, these results suggest substantial similarity between the semantic codes for production and comprehension, but with marked differences that merit further investigation. Based on a rank-consistency analysis of SVM weights across ROIs, the left Inferior Frontal Gyrus contributed most strongly to semantic category decoding in our data, with other major contributions from the left Temporal Pole (TP), the left hippocampus, and the left posterior Middle Temporal Gyrus. The TP appears to contribute more strongly to finer-grained category decoding, while the hippocampus may contribute more to superordinate (animacy) decoding. These results demonstrate the feasibility of decoding semantic information from the distributed semantic network in a way that can benefit language BMIs, and they reveal that some components of the neural code for semantic information are shared between production and comprehension while others are specific to each.
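To make the decoding pipeline concrete, below is a minimal sketch of the within-task analysis the abstract describes: time-windowed high gamma power (70-150 Hz) per channel, fed to a linear SVM evaluated with 10-fold cross-validation. The abstract does not specify preprocessing, window counts, or classifier hyperparameters, so the high_gamma_power helper, the synthetic trials/labels arrays, and all parameter values here are illustrative assumptions, not the authors' actual code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def high_gamma_power(trials, fs, band=(70.0, 150.0), n_windows=4):
    """Band-pass each channel into the high-gamma range, take the
    analytic-signal power envelope, and average it within time windows.

    trials: array of shape (n_trials, n_channels, n_samples)
    Returns features of shape (n_trials, n_channels * n_windows).
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2  # instantaneous power
    # Average power within equal-width time windows, per channel.
    windows = np.array_split(power, n_windows, axis=-1)
    feats = np.stack([w.mean(axis=-1) for w in windows], axis=-1)
    return feats.reshape(len(trials), -1)

# Placeholder data standing in for one patient's trials on one task:
# 300 trials, 32 sEEG/ECoG channels, 0.5 s at 1 kHz, with 20 trials
# per semantic category. Real preprocessing is not specified here.
fs = 1000.0
rng = np.random.default_rng(0)
trials = rng.standard_normal((300, 32, 500))
labels = rng.permutation(np.repeat(np.arange(15), 20))  # 15 categories

X = high_gamma_power(trials, fs)
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
scores = cross_val_score(clf, X, labels, cv=10)  # chance = 1/15 ~ 6.7%
print(f"mean decoding accuracy: {scores.mean():.3f}")
```

A cross-task generalization analysis of the kind reported above would fit the same pipeline on the trials of one task (e.g., picture naming) and score it on the trials of another (e.g., word-to-picture matching), instead of cross-validating within a single task.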
Topic Areas: Meaning: Lexical Semantics, Computational Approaches