Poster Presentation

Generalizable Speech Somatotopy Revealed by Intracranial Stereo-encephalography

Poster Session C, Saturday, September 13, 11:00 am - 12:30 pm, Field House

Jinlong (Torres) Li1,2,3, Tessy Thomas2,3, Aditya Singh2,3, Nitin Tandon2,3,4; 1Department of Bioengineering, Rice University, 2Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, 3Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, 4Memorial Hermann Hospital, Texas Medical Center

Human language production requires coordinated control of the vocal articulators to generate the distinct phonological units that compose meaningful words. Intracranial speech brain-computer interfaces (iBCIs) have provided communication alternatives for people who have lost speech due to motor system disorders by decoding preserved articulatory representations from surface grids and penetrating arrays (Bouchard et al. 2013; Metzger et al. 2023; Willett et al. 2023). However, the capacity of depth electrodes to capture neural representations of articulatory control remains poorly understood. Here, we characterized broadband high-gamma activity (BGA; 70–150 Hz) recorded from depth electrodes in 42 patients performing naturalistic sentence production tasks. Multivariate temporal receptive field (mTRF) encoding models were fit to both phoneme sequences and articulatory kinematic trajectories (AKTs) derived from an acoustic-to-articulatory inversion (AAI) model. In the speech-related ventral sensorimotor cortex (vSMC), highly correlated sites (r > 0.3) were identified both on the precentral gyrus and along the sulcal portions of BA4 (primary motor cortex) and BA6 (premotor cortex). Hierarchical clustering and low-dimensional projection of the encoding filter weights from these sites revealed a structured organization of phonemic encoding by primary articulator (vocalic, lingual, labial). A prominent somatotopic division along the ventral-dorsal axis of the vSMC distinguished tongue and lip representations. Notably, vocalic representations were localized deeper within the central sulcus than other supralaryngeal representations. In two bilingual participants who performed the task in both English and Spanish, phoneme and AKT encoding models explained substantial cross-language neural variance, and AKT models generalized across languages better than phoneme-based models.
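The mTRF fitting step above can be sketched as a ridge regression over time-lagged stimulus features, scored by held-out correlation with the neural response. The feature count, lag window, sampling rate, and regularization strength below are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: 6 articulatory kinematic features at a nominal 100 Hz
# (placeholders; the real AKT features and sampling rate are not given here).
n_samples, n_features = 2000, 6
stim = rng.standard_normal((n_samples, n_features))

# Time-lagged design matrix covering roughly -100 ms to +400 ms around the
# response (51 lags at 100 Hz); np.roll wrap-around at the edges is ignored
# for this sketch.
lags = range(-10, 41)
X = np.column_stack([np.roll(stim, lag, axis=0) for lag in lags])

# Simulate one electrode's high-gamma response from a ground-truth filter.
true_w = rng.standard_normal(X.shape[1])
y = X @ true_w + 0.5 * rng.standard_normal(n_samples)

# Ridge-regularized mTRF fit, evaluated by correlation (r) on held-out time.
split = int(0.8 * n_samples)
model = Ridge(alpha=1.0).fit(X[:split], y[:split])
r = np.corrcoef(model.predict(X[split:]), y[split:])[0, 1]
```

In this framing, an electrode counted as "highly correlated" is simply one whose held-out r exceeds a threshold such as 0.3.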
At the single-electrode level, tuning to phonemic groups and articulatory gestures was preserved across languages. Low-dimensional projections of the filter weights revealed a separable boundary between labial and lingual phones in both languages. Reconstruction of AKTs with sequence-to-sequence models preserved decoding fidelity even when the models were trained on a different language, indicating a shared articulatory code across languages. These findings reveal a unifying articulatory motif underlying natural language production at the local field potential level and highlight the capacity of depth electrodes for future speech iBCI applications. Furthermore, the demonstrated cross-linguistic generalizability suggests the potential for novel neural speech decoding strategies based on latent articulatory representations.
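The grouping of electrodes by encoding filter weights described above can be sketched with agglomerative clustering followed by a low-dimensional projection. The electrode count, weight dimensionality, and three-group structure here are synthetic stand-ins for illustration, not the recorded data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical filter weights: 60 electrodes x 40 weights, drawn around three
# well-separated centers standing in for labial / lingual / vocalic groups.
centers = rng.standard_normal((3, 40)) * 3
W = np.vstack([c + rng.standard_normal((20, 40)) for c in centers])

# Ward hierarchical clustering of the weight vectors, cut into 3 clusters.
Z = linkage(W, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")

# Two-dimensional projection of the weights for visualization.
W2d = PCA(n_components=2).fit_transform(W)
```

With clearly separated articulator groups, the dendrogram cut recovers them, and the 2-D projection exhibits the kind of separable boundary between groups that the abstract reports for labial and lingual phones.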

Topic Areas: Language Production, Speech Motor Control
