Poster Presentation

Localizing visual mismatch responses by hearing signers of American Sign Language (ASL) using Magnetoencephalography (MEG)

Poster Session B, Friday, September 12, 4:30 - 6:00 pm, Field House
This poster is part of the Sandbox Series.

Yuting Zhang1, Tzuhan Zoe Cheng1, Christina Zhao1, Qi Cheng1; 1University of Washington

Mismatch responses (MMR) provide a window into the brain's automatic detection of stimulus changes and have been used to study pre-attentive linguistic processing at various levels. While many auditory MMR (aMMR) studies have reported automatic phonological and lexical processing localized to fronto-temporal networks, only one EEG study has examined visual MMR (vMMR) in a sign language. That study found enhanced responses around 230 ms in deaf Hong Kong Sign Language (HKSL) signers to lexical signs (vs. non-signs) compared to hearing non-signers, suggesting similar automatic lexical processing in the visual modality, but with topographic differences. Using magnetoencephalography (MEG), the current study established a vMMR paradigm in American Sign Language (ASL) to better localize the vMMR and to examine the role of sign language experience in visual processing among hearing signers and non-signers. We used a visual oddball paradigm in which deviants were interspersed among standards on about 15% of trials. Each block includes one standard-deviant pair that differs only in handshape. Two ASL lexical signs (BOY: flat-O handshape at forehead; KID: horn handshape at nose) were selected, and two non-signs were created by switching the handshapes of the real signs (BOY-fake: horn handshape at forehead; KID-fake: flat-O handshape at nose), yielding four standard-deviant pairs, two with real signs as deviants and two with non-signs as deviants. Static sign pictures were presented in the peripheral visual fields, and participants were instructed to attend to changes in a central fixation cross as a distraction task. We are aiming for 16 participants in each group. Here we report preliminary group findings from 13 hearing signers (mean age = 32; average years of signing = 13.8; ASL-CT score = 23.8) and 10 age-matched hearing non-signers. The vMMR is calculated by subtracting the evoked response to the standard from the evoked response to the deviant for each standard-deviant pair. When lexical signs served as deviants, non-signers showed a ventral-stream processing pattern, with activations in primary visual areas and inferior/middle temporal regions. In contrast, signers showed earlier vMMR responses (140-180 ms) primarily involving the dorsal stream, with activations in superior/inferior parietal regions, sensorimotor cortex, Broca's area, and the insula, indicating a role of sign language experience in the automatic visual processing of lexical signs. When non-signs served as deviants, both groups showed more distributed activation patterns, though signers still showed earlier vMMR responses. These findings suggest that sign language experience shapes pre-attentive visual processing of ASL lexical signs: non-signers rely predominantly on the ventral visual pathway for object recognition, whereas signers recruit the dorsal pathway for both visual and language processing, integrating the visual, spatial, and motor information crucial for sign language representation. This study provides novel MEG evidence for modality-specific adaptations of automatic language processing in the brain. Once data collection is complete, we will use whole-brain permutation cluster tests as well as ROI-based analyses to statistically test group differences.
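
As an illustration of the oddball design described above, the sketch below generates one block's trial sequence with deviants on roughly 15% of trials. The trial count, deviant probability, and minimum-gap constraint are our own assumptions for illustration, not the study's actual presentation parameters.

```python
import random

def make_oddball_sequence(n_trials=200, deviant_prob=0.15, min_gap=2):
    """Return a list of 'standard'/'deviant' labels with ~15% deviants,
    enforcing at least `min_gap` standards between successive deviants.
    All parameter values here are illustrative assumptions."""
    seq = []
    since_deviant = min_gap  # allow a deviant early in the block
    for _ in range(n_trials):
        if since_deviant >= min_gap and random.random() < deviant_prob:
            seq.append("deviant")
            since_deviant = 0
        else:
            seq.append("standard")
            since_deviant += 1
    return seq

block = make_oddball_sequence()
print(f"deviant rate: {block.count('deviant') / len(block):.2f}")
```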
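
The vMMR difference wave itself is a simple subtraction of condition-averaged evoked responses. A minimal MNE-Python sketch of that step, assuming epochs have already been segmented and labeled 'standard' and 'deviant' (the file name is hypothetical, not the authors' actual pipeline):

```python
import mne

# Hypothetical epochs file for one standard-deviant pair; the event
# labels 'standard' and 'deviant' are assumed to exist in the epochs.
epochs = mne.read_epochs("sub-01_pair-BOY_epo.fif")

# Average trials within each condition to obtain evoked responses.
evoked_standard = epochs["standard"].average()
evoked_deviant = epochs["deviant"].average()

# vMMR = deviant evoked response minus standard evoked response.
vmmr = mne.combine_evoked([evoked_deviant, evoked_standard], weights=[1, -1])

# Crop to the early window where signers showed effects (140-180 ms).
vmmr_early = vmmr.copy().crop(tmin=0.140, tmax=0.180)
```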
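
For the planned statistics, a whole-brain spatio-temporal permutation cluster test on source-space vMMRs could take the form sketched below in MNE-Python. The file names, subject counts, and the assumption that all source estimates are morphed to fsaverage describe a typical pipeline, not the authors' confirmed analysis.

```python
import numpy as np
import mne
from mne.stats import spatio_temporal_cluster_test

# Hypothetical per-subject source-space vMMRs, morphed to fsaverage.
signer_stcs = [mne.read_source_estimate(f"signer-{i:02d}_vmmr")
               for i in range(1, 14)]
nonsigner_stcs = [mne.read_source_estimate(f"nonsigner-{i:02d}_vmmr")
                  for i in range(1, 11)]

# Cluster tests expect arrays of shape (n_subjects, n_times, n_vertices).
X_signers = np.stack([stc.data.T for stc in signer_stcs])
X_nonsigners = np.stack([stc.data.T for stc in nonsigner_stcs])

# Vertex adjacency of the shared fsaverage source space.
src = mne.read_source_spaces("fsaverage-ico-5-src.fif")
adjacency = mne.spatial_src_adjacency(src)

# Nonparametric test for a group difference at each vertex/time point,
# with cluster-level correction for multiple comparisons.
t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(
    [X_signers, X_nonsigners],
    adjacency=adjacency,
    n_permutations=1000,
)
print(f"{(cluster_pv < 0.05).sum()} significant cluster(s)")
```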

Topic Areas: Signed Language and Gesture, Methods
