Cortical Tracking of Manual and Facial Articulators in Sign Language Processing
Poster Session A, Friday, September 12, 11:00 am - 12:30 pm, Field House
Joaquin Ordoñez1,2, Chiara Luna Rivolta1, Mikel Lizarazu1, Brendan Costello1,3; 1Basque Center on Cognition, Brain and Language, 2University of the Basque Country (UPV/EHU), 3Ikerbasque (Basque Research Foundation)
Cortical tracking of spoken language refers to the synchronization of brain oscillations with the speech envelope. While research has focused primarily on spoken language, some studies have begun to explore cortical tracking in sign language, which relies on the visual modality and on the coordinated movement of facial and body articulators. These articulators (e.g., hands, mouth) serve distinct linguistic functions and convey varying amounts of information. In previous work, our group used Kinect v2 motion tracking and found that hearing signers tracked hand and head movements more than non-signers did in right parietal regions. This study extends that work by examining whether hearing signers also track the kinematics of facial articulators, which carry linguistic information but could not be reliably captured with Kinect. We used the same MEG dataset, collected from two groups of hearing Spanish speakers (14 proficient signers of Spanish Sign Language (LSE) and 14 non-signers), but extracted facial and body motion data from the video stimuli using MediaPipe (Lugaresi et al., 2019), a deep-learning-based computer vision model. The stimuli consisted of 1-minute videos showing narratives in LSE (known to the signers) or Russian Sign Language (RSL, unknown to all participants), produced by two deaf native signers of each language. Brain activity was recorded with MEG while participants watched 20 videos (10 per language, 5 per model), followed by a recognition task using a five-second video probe and a foil to ensure engagement. To characterize the visual linguistic signal, we extracted speed time series from several body articulators (hands, head, torso), along with facial features: mouth aperture (the distance between the upper and lower lips); mouth movement (a histogram of oriented gradients, which reflects how much the lips are moving and in which directions); and face orientation in terms of pitch (up-down tilt of the face) and yaw (left-right rotation of the face). The results for the body articulators confirmed the findings of the previous study, demonstrating that MediaPipe (which requires only a standard video recording) yields cortical tracking results comparable to the earlier method (which required specialized cameras and setups). For facial articulators, however, we found no evidence of tracking and no group differences. To assess whether our facial kinematic measures lacked sensitivity, we analyzed cortical tracking of mouth aperture during a spoken language task in which participants viewed Spanish (known) and Russian (unknown) narratives. Signers showed stronger tracking of mouth movements associated with spoken Spanish than non-signers in frontal areas, while both groups tracked mouth movements in Russian to a similar degree. These findings show that speech-related mouth movements are tracked in challenging contexts, such as the unknown-language condition, and that hearing signers also track mouth movements in their known language, possibly due to their experience with a visual language. Importantly, these results demonstrate that our measure is sensitive enough to capture cortical tracking of mouth movement. Taken together, the results suggest that cortical tracking of facial articulators may not occur during sign language processing and that tracking in sign language relies primarily on manual articulators.
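To illustrate the kind of feature extraction described above, the following is a minimal sketch (not the authors' actual pipeline) of how a mouth-aperture time series could be derived from a standard video with MediaPipe Face Mesh. The landmark indices, function name, and video path are illustrative assumptions, and the authors' additional features (speed of body articulators, histogram of oriented gradients, pitch, yaw) are not shown.

```python
# Hypothetical sketch: per-frame mouth aperture from a video using MediaPipe Face Mesh.
# Assumes `mediapipe`, `opencv-python`, and `numpy` are installed.
import cv2
import mediapipe as mp
import numpy as np

# Assumed indices for the midpoints of the inner upper and lower lips
# in the 468-point Face Mesh topology (an assumption, not from the abstract).
UPPER_LIP, LOWER_LIP = 13, 14

def mouth_aperture_series(video_path):
    """Return per-frame upper-lower lip distance in normalized image coordinates."""
    cap = cv2.VideoCapture(video_path)
    aperture = []
    with mp.solutions.face_mesh.FaceMesh(
            static_image_mode=False,
            max_num_faces=1,
            refine_landmarks=True) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV reads frames as BGR.
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_face_landmarks:
                lm = results.multi_face_landmarks[0].landmark
                upper = np.array([lm[UPPER_LIP].x, lm[UPPER_LIP].y])
                lower = np.array([lm[LOWER_LIP].x, lm[LOWER_LIP].y])
                aperture.append(np.linalg.norm(upper - lower))
            else:
                aperture.append(np.nan)  # no face detected in this frame
    cap.release()
    return np.array(aperture)

# Example usage (hypothetical file name):
# aperture = mouth_aperture_series("lse_narrative_01.mp4")
```

In a cortical tracking analysis, a time series like this would then be resampled and compared against the recorded MEG signals; that step is not part of this sketch.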
Topic Areas: Signed Language and Gesture