Sign perceivers allocate gaze based on contextual cues during ASL sentence comprehension
Poster Session A, Friday, September 12, 11:00 am - 12:30 pm, Field House
Paris Gappmayr1, Amy Lieberman1; 1Boston University
Sign languages such as American Sign Language (ASL) unfold dynamically over time, requiring perceivers to make real-time decisions about where to direct their gaze to capture linguistic information as it occurs and before it disappears. Skilled deaf signers look mostly at the face during sign perception (e.g., Emmorey et al., 2009), but occasionally direct their gaze to the hands when perceiving fingerspelling (Gappmayr & Lieberman, 2024). However, signers cannot always predict when fingerspelling will occur in the linguistic signal, and their need to attend to the hands likely varies as a function of their ability to anticipate which word will be fingerspelled. We investigated the allocation and timing of gaze to the hands and the face during perception of signed input. We asked whether signers allocate gaze to the hands longer when they expect fingerspelling, such as when the sentence context suggests an upcoming name (e.g., C-H-R-I-S), than when they expect a sign (e.g., the sentence context suggests the sign APPLE) but are instead presented with the word in fingerspelled form (A-P-P-L-E). We also included conditions in which the ability to predict the identity of the fingerspelled word was low (e.g., participants could not know which name would be produced), compared to conditions in which the sentence context suggested a commonly fingerspelled word (e.g., B-A-K-E-R-Y). Eleven Deaf adults who were exposed to ASL before the age of five completed the eye-tracking experiment (data collection ongoing). Participants’ eye movements were recorded as they watched 40 pre-recorded ASL sentences and responded to occasional comprehension questions to maintain attention. Stimulus sentences were constructed to manipulate fingerspelling expectation and target word predictability; an additional 20 control sentences contained no fingerspelling. We coded whether signers gazed at interest areas over the face or the hands (a static area below the shoulders) throughout each sentence and performed a divergence point analysis on 50 ms binned eye-tracking data, focusing on the proportion of looks to the hands vs. the face. For each trial, time was aligned to the onset of the fingerspelled word. We ran independent t-tests at each time bin to compare the effects of fingerspelling expectation and target word predictability on the mean proportion of looks to the hands. In the expected vs. unexpected fingerspelling pair (e.g., B-A-K-E-R-Y vs. A-P-P-L-E), divergence occurred 500 ms after onset, with the effect lasting 250 ms (p < .001). In the predictable vs. unpredictable word pair (e.g., B-A-K-E-R-Y vs. C-H-R-I-S), divergence occurred 300 ms after onset and persisted for 400 ms (p < .001). In control trials with no fingerspelling (e.g., the sign APPLE), gaze remained on the face throughout the sentence. These results suggest that signers look longer at the hands when fingerspelling occurs unexpectedly or when they cannot predict the word that will be produced. The unique visual modality of sign languages, paired with overt behavioral data from eye tracking, allows for inferences about real-time language processing and prediction.
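For readers unfamiliar with divergence point analyses, the sketch below illustrates the general logic described in the abstract: bin the proportion of looks to the hands into 50 ms windows aligned to fingerspelled-word onset, run an independent t-test at each bin, and report the first bin at which the two conditions differ and continue to differ. This is a minimal illustration, not the authors' analysis code; the function name, the alpha level, the requirement of consecutive significant bins, and the simulated data are all assumptions introduced here for demonstration.

```python
# Illustrative sketch of a per-bin divergence analysis (not the authors' code).
# Assumes fixations have already been assigned to "hands" vs. "face" interest
# areas, binned into 50 ms windows aligned to fingerspelled-word onset, and
# averaged per participant. The alpha level and the consecutive-bin criterion
# are illustrative choices, not values reported in the abstract.
import numpy as np
from scipy import stats

BIN_MS = 50  # bin width used in the abstract


def divergence_point(cond_a, cond_b, alpha=0.05, min_consecutive=2):
    """Find the first time bin where looks to the hands differ between
    two conditions and remain different across a run of bins.

    cond_a, cond_b : arrays of shape (n_participants, n_bins) holding the
        per-participant proportion of looks to the hands in each bin.
    Returns (divergence_onset_ms, duration_ms), or (None, 0) if none found.
    """
    n_bins = cond_a.shape[1]
    # Independent t-test at every time bin, as described in the abstract.
    pvals = np.array([
        stats.ttest_ind(cond_a[:, b], cond_b[:, b]).pvalue
        for b in range(n_bins)
    ])
    significant = pvals < alpha

    run_start, run_len = None, 0
    for b in range(n_bins):
        if significant[b]:
            run_start = b if run_len == 0 else run_start
            run_len += 1
            if run_len >= min_consecutive:
                # Extend to the end of the significant run to report duration.
                end = b
                while end + 1 < n_bins and significant[end + 1]:
                    end += 1
                return run_start * BIN_MS, (end - run_start + 1) * BIN_MS
        else:
            run_len = 0
    return None, 0


# Simulated example: 11 participants, 20 bins (0-1000 ms), with the two
# conditions diverging at 500 ms after word onset.
rng = np.random.default_rng(0)
expected = rng.beta(2, 8, size=(11, 20))                          # fewer looks to hands
unexpected = expected + np.where(np.arange(20) >= 10, 0.3, 0.0)   # more looks from 500 ms on
onset_ms, duration_ms = divergence_point(expected, unexpected)
print(f"Divergence at {onset_ms} ms after word onset, lasting {duration_ms} ms")
```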
Topic Areas: Signed Language and Gesture