Shared neural representations of visual speech across modalities are related to reading ability in deaf and hearing individuals
Poster Session D, Saturday, September 13, 5:00 - 6:30 pm, Field House
Samuel Evans1,3, Cathy Price2, Tae Twomey2, Mairéad MacSweeney1; 1UCL, Institute of Cognitive Neuroscience, 2UCL, Wellcome Centre for Human Neuroimaging, 3KCL, Neuroimaging department
Learning to read provides access to life-long educational and vocational opportunities. Some, but not all, deaf children find learning to read challenging due to reduced access to both signed and spoken language. Understanding what drives variation in reading outcomes, and the neural processes underpinning it, is important for developing neurobiologically informed reading interventions. Speechreading (or lip-reading) is positively associated with reading ability in deaf and hearing readers (Kyle et al., 2016). This suggests that visual speech may provide representations of spoken language that support reading development. At a neural level, a recent study with hearing adults showed that words presented in auditory-alone and visual-alone conditions evoked similar neural patterns in bilateral superior/middle temporal cortex (STC/MTC) (Van Audenhaege et al., 2024). Here, we identified shared sublexical neural representations of visual speech in deaf and hearing adults. Further, we tested the hypothesis that the strength of shared encoding of sublexical features is linked to reading ability. We scanned 22 severely to profoundly deaf adults and 25 hearing adults using 3T fMRI. Before scanning, participants completed the Vernon-Warden Reading Test. During scanning, the hearing group attended to 8 words sharing initial consonants, vowels and final consonants. These were presented in visual-alone and auditory-alone conditions. The deaf group were presented with the same words as visual speech (visual-alone) or as dynamic text (cursive text revealed letter by letter) to encourage sublexical processing. Accuracy on an occasional 1-back monitoring task was high across groups and stimulus types. We used the cross-validated Mahalanobis distance to quantify the dissimilarity between the neural patterns evoked by each word in each stimulus type and tested these distances against two models: (1) a lexical model, in which each word was predicted to be maximally and equivalently dissimilar to all other words, and (2) a sublexical model, in which dissimilarity was determined by the number of shared sublexical units (this model included only off-diagonal distances, excluding comparisons of a word with itself). We used a searchlight to find regions showing reliable within-stimulus distances and tested the models against the within- and across-stimulus distances in these areas, on the assumption that within-stimulus sublexical representation is necessary for common across-stimulus representation. In hearing participants, consistent with previous findings, we found shared neural patterns for visual and auditory speech in bilateral STC/MTC. These reflected lexical dissimilarity in the left and sublexical structure in the right STC/MTC. Importantly, the extent of shared sublexical structure correlated positively with reading ability. In deaf participants, common sublexical representations for text and visual speech were found in the left STC/MTC. Again, the strength of this encoding correlated positively with reading ability. In summary, we found shared lexical and sublexical structure in the STC/MTC of deaf and hearing individuals. In both groups, the strength of encoding of shared sublexical representations of visual speech was related to reading outcomes: in left STC/MTC in the deaf group and in right STC/MTC in the hearing group. These findings provide neurobiological evidence that visual speech supports reading development.
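For readers unfamiliar with this type of analysis, the sketch below illustrates the representational similarity logic described in the abstract: cross-validated Mahalanobis (crossnobis) distances between word-evoked patterns are compared against a lexical model (all words equally and maximally dissimilar) and a sublexical model (dissimilarity scaling with the number of non-shared onset/vowel/coda units). The eight words, their sublexical codes, the two-partition toy data and the voxel count are hypothetical placeholders; the actual stimuli, noise normalisation, searchlight and group-level analyses are not reproduced here, so this is a minimal illustration rather than the authors' pipeline.

import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

# Hypothetical sublexical codes (onset consonant, vowel, final consonant) for 8 words
words = {
    "word1": ("b", "a", "t"), "word2": ("b", "a", "k"),
    "word3": ("b", "i", "t"), "word4": ("b", "i", "k"),
    "word5": ("m", "a", "t"), "word6": ("m", "a", "k"),
    "word7": ("m", "i", "t"), "word8": ("m", "i", "k"),
}
labels = list(words)
n = len(labels)
pairs = list(combinations(range(n), 2))  # off-diagonal word pairs only

# Sublexical model RDM: dissimilarity = number of non-shared sublexical units
sublexical = np.array([
    sum(a != b for a, b in zip(words[labels[i]], words[labels[j]]))
    for i, j in pairs
])

def crossnobis(patterns_a, patterns_b):
    """Cross-validated (squared) Mahalanobis distances between condition patterns.
    patterns_a / patterns_b: (n_conditions, n_voxels) estimates from two independent
    scanning partitions, assumed already noise-normalised (whitened)."""
    d = np.empty(len(pairs))
    for k, (i, j) in enumerate(pairs):
        diff_a = patterns_a[i] - patterns_a[j]
        diff_b = patterns_b[i] - patterns_b[j]
        d[k] = diff_a @ diff_b / patterns_a.shape[1]  # unbiased estimate, can be negative
    return d

# Toy data standing in for word-evoked beta patterns in one searchlight/region
rng = np.random.default_rng(0)
run1 = rng.standard_normal((n, 100))
run2 = rng.standard_normal((n, 100))
neural_rdm = crossnobis(run1, run2)

# Lexical model (all words equally dissimilar) is a constant predictor, so here it is
# tested simply as the mean crossnobis distance being reliably above zero.
lexical_evidence = neural_rdm.mean()
print(f"lexical model: mean crossnobis distance = {lexical_evidence:.3f}")

# Sublexical model: rank-correlate neural distances with the model distances
rho, p = spearmanr(sublexical, neural_rdm)
print(f"sublexical model: rho = {rho:.3f}, p = {p:.3f}")

In the study itself, an equivalent model fit would presumably be computed per participant within the searchlight or region of interest and then related to reading scores across participants; with the random toy data above, neither model should show a reliable fit.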
Topic Areas: Reading