Task-Invariant Decoding of Semantic Representations with Intracranial Human Neurophysiology
Poster Session B, Friday, September 12, 4:30 - 6:00 pm, Field House
Julien Dirani1,2, Raouf Belkhir1,2, Eliza M. Reedy2, Steven Salazar2, Catherine Liégeois-Chauvel2, Thandar Aung2, Arka N. Mallela2, Jorge A. González-Martínez2, Bradford Z. Mahon1,2; 1Carnegie Mellon University, 2University of Pittsburgh Medical Center
INTRODUCTION: The human brain's ability to store and retrieve semantic knowledge is a fundamental aspect of cognition, and of language in particular. A key characteristic of semantic representations is that they must achieve invariance over inputs, ensuring that, for instance, the concept of a “knife” retains stable meaning across contexts, tasks (e.g., naming, reading), and input modalities (e.g., picture, word, sound). But what is the spatio-temporal neural signature of task-invariance in semantic representations? While prior work has significantly advanced our understanding of this question, it has typically relied on methods that emphasize either spatial or temporal resolution. Here, we build on prior work by using stereoelectroencephalography (SEEG), which offers direct neural recordings with both high spatial and high temporal precision.
METHODS: Fourteen pre-surgical epilepsy patients performed picture naming and word reading during intracranial recording. Importantly, the tasks did not require explicit semantic judgments, allowing us to capture spontaneous semantic activation. Using SEEG data (local field potentials) from all electrodes within each participant, we trained linear classifiers to distinguish among seven semantic categories—animals, body parts, clothes, fruits and vegetables, furniture, tools, and vehicles—at each time point (sampled at 100 Hz) from stimulus onset (0 ms) to 800 ms. We then tested the models’ ability to generalize across the picture naming and word reading tasks at all pairs of timepoints (t_picture, t_word). This approach allows us to identify representations of semantic categories that are de-confounded from task- and stimulus-specific representations. Timepoints where models successfully generalized across tasks indicated when such representations were activated, while the spatial patterns of the models revealed where in the brain these representations were localized.
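The cross-task temporal generalization analysis described above can be sketched as follows. This is a minimal illustration on synthetic data: the array sizes, the planted "signal window," and the use of scikit-learn's LogisticRegression as the linear classifier are assumptions for demonstration, not the study's actual recording parameters or pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for SEEG local field potentials:
# trials x electrodes x timepoints (100 Hz, 0-800 ms -> 80 samples).
n_trials, n_elec, n_times = 140, 30, 80
n_classes = 7  # animals, body parts, clothes, fruits/vegetables, ...
labels = rng.integers(0, n_classes, n_trials)

def make_task_data():
    """Simulate one task: noise plus a class-specific spatial pattern
    (shared across tasks) in a mid-latency window, mimicking a
    task-invariant semantic code."""
    X = rng.normal(0.0, 1.0, (n_trials, n_elec, n_times))
    for c in range(n_classes):
        X[labels == c, c, 20:50] += 2.0  # hypothetical signal window
    return X

X_picture, X_word = make_task_data(), make_task_data()

# Train one classifier per picture-naming timepoint, then test it at
# every word-reading timepoint: a generalization matrix indexed by
# (t_picture, t_word), with above-chance cells marking task-invariant
# category information.
gen = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_picture[:, :, t_train], labels)
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X_word[:, :, t_test], labels)

# Cells well above chance (1/7) indicate cross-task generalization.
```

In the real analysis, statistical significance of each cell would be established against chance (e.g., by permutation testing) within each participant; the sketch only shows the train/test structure that yields the (t_picture, t_word) matrix.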
RESULTS: Semantic representations activated during picture naming at ~175–400 ms after stimulus onset generalized to the word reading task at ~200–600 ms. This cross-task generalization reveals the temporal dynamics of spontaneously activated, task-invariant semantic representations, in the absence of an explicit categorization task. Invariant semantic representations were distributed across the cortex but were most strongly localized in the left anterior temporal lobe (ATL), a region previously associated with modality-independent semantic processing. We also observed strong activation patterns in the posterior lateral temporal lobe, suggesting a broader network supporting task-invariant semantic knowledge. Finally, we present preliminary results of connectivity analyses describing the functional networks supporting semantic invariance.
CONCLUSION: Our findings provide electrophysiological evidence for the spatio-temporal dynamics of spontaneous, task-invariant semantic representations in the human brain. These results contribute to a growing body of evidence that semantic knowledge is supported by distributed, yet convergent, networks.
Topic Areas: Meaning: Lexical Semantics, Language Production