Slide Session C
Sunday, September 14, 2:45 - 3:30 pm, Elstad Auditorium
Talk 1: Stability and plasticity in the reading circuitry: combining insights from large-scale databases and a longitudinal intervention study
Maya Yablonski1, Jamie L. Mitchell1, Hannah L. Stone1,2, Mia Fuentes-Jimenez1, Jason D. Yeatman1; 1Stanford University, 2University of California, Santa Barbara
Reading is a complex skill that takes years to master. While extensive research has described the brain's reading circuitry, it remains unclear how the experience of learning to read shapes brain development, versus how existing neural mechanisms scaffold learning to read. Here, we asked: 1) Is there a connectivity blueprint linking the visual and language networks that establishes the bedrock of the reading circuitry? 2) Do these connectivity patterns change with learning? We first addressed these questions using two public datasets: in the Natural Scenes Dataset, we identified text-selective responses in precision 7T adult data (N=8) and characterized distinct functional connectivity patterns at the individual level. We then replicated these patterns in the developmental Healthy Brain Network dataset (N=224; ages 7-20 years) across a wide range of reading levels (Yablonski et al., 2024). To examine whether these patterns change with learning, we conducted a longitudinal study in which children with dyslexia completed an intensive reading intervention. Compared with public, large-scale cross-sectional databases, intervention designs allow a close examination of the concurrent timecourses of change in reading ability and change in connectivity, thereby teasing apart developmental effects from the experience of learning to read. All children (N=44 intervention; N=42 controls; ages 7-13 years) completed functional magnetic resonance imaging (fMRI) scans before the intervention, immediately after its completion, and six and twelve months later. This dense sampling scheme allows us to delineate a trajectory of both short- and long-term changes in brain function and connectivity following the intervention. At each timepoint, children completed a functional localizer in which they viewed text and non-text stimuli while performing tasks that did not explicitly require reading (Mitchell et al., bioRxiv; Stone et al., 2025). In addition, they completed a "resting state" scan in which they watched a silent nature movie without language content. The functional localizer was used to define text-selective regions of interest (ROIs), namely the subregions of the visual word form area (VWFA1 and VWFA2) and frontal language regions, on each individual's cortical surface. These ROIs were then used as seeds in a functional connectivity analysis of the independent movie-watching data to explore the functional connectivity patterns of the reading network. We found that VWFA2 had elevated connectivity with frontal language regions compared with the adjacent VWFA1 and neighboring control regions. Importantly, connectivity between VWFA2 and frontal language regions was present in children with dyslexia even before the intervention, and even in children in whom we could not detect cortical regions that responded selectively to text (44% of children with dyslexia). Further, these patterns remained stable as children improved their reading skills over the course of the intervention. These findings support a view of functional connectivity as a blueprint for the emergence of text-selective regions. The controlled intervention setting, combined with multiple timepoints per subject and multimodal imaging, takes us a step beyond the insights that can be obtained from large databases and elucidates the relationship between the experience of learning to read and brain connectivity within individuals.
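To make the seed-based connectivity step concrete, here is a minimal sketch of the approach described above, assuming the movie-watching BOLD signal has already been averaged within each individually defined ROI. All array names, shapes, and values are illustrative placeholders, not the authors' actual pipeline.

```python
# Sketch of seed-based functional connectivity between localizer-defined ROIs,
# using placeholder ROI-averaged time series from the movie-watching scan.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 300  # hypothetical number of movie-watching volumes

# Placeholder ROI-averaged time series (in practice, extracted from each
# child's surface-based ROIs: VWFA1, VWFA2, and a frontal language region).
vwfa1 = rng.standard_normal(n_timepoints)
vwfa2 = rng.standard_normal(n_timepoints)
frontal_lang = rng.standard_normal(n_timepoints)

def fisher_z(r):
    """Fisher r-to-z transform, commonly applied before group-level statistics."""
    return np.arctanh(r)

# Pearson correlation between each seed and the frontal language ROI.
r_vwfa2 = np.corrcoef(vwfa2, frontal_lang)[0, 1]
r_vwfa1 = np.corrcoef(vwfa1, frontal_lang)[0, 1]

print(f"VWFA2 ~ frontal: z = {fisher_z(r_vwfa2):.3f}")
print(f"VWFA1 ~ frontal: z = {fisher_z(r_vwfa1):.3f}")
```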
Talk 2: Language-related cortical pathways in deaf signers: Core invariance and modality-specific variability
Patrick C. Trettenbrein1,2, Cheslie C. Klein1, Emiliano Zaccarella1, Angela D. Friederici1; 1Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany, 2University of Göttingen, Göttingen, Germany
Introduction: Language processing in the adult neurotypical brain is subserved by several white-matter pathways that connect inferior frontal, temporal, and parietal language-relevant cortical regions. Here, we used diffusion-weighted MRI to compare the macro- and micro-structural properties of the core and extended language network in a group of deaf signers and a control group of hearing non-signers. Methods: This analysis includes diffusion-weighted and structural data from 24 deaf signers (11 male; mean age = 32.25, SD = 8.69) who acquired a sign language early in life (mean age of acquisition = 1.29, SD = 2.22), primarily from their deaf parents (N = 19), as well as 24 hearing non-signers (11 male; mean age = 32.75, SD = 8.25). All data were pre-processed using the standardized workflow implemented in QSIPrep (Cieslak et al., 2021), which integrates advanced quality control measures. We then used pyAFQ (Garyfallidis et al., 2014; Kruper et al., 2021; Yeatman et al., 2012) to perform automatic fiber quantification, deriving macro- and micro-structural measures from the diffusion data for the core and extended language network and its right-hemispheric homologues (i.e., the bilateral arcuate fasciculus, superior longitudinal fasciculus, posterior arcuate fasciculus, inferior fronto-occipital fasciculus, uncinate fasciculus, and inferior longitudinal fasciculus). The pyAFQ workflow is a standardized multi-step process that produces robust axonal models and tract profiles. Macro-structural differences in language-related pathways between groups were quantified by extracting the cleaned streamline counts and fitting a linear mixed-effects model in R (R Core Team, 2024) that accounted for possible individual-level variation. Macro-structural asymmetry of language-related pathways in both groups was determined by computing a lateralization index (LI) from the streamline counts (S) of every pathway as follows: LI = (S_left - S_right) / (S_left + S_right). In addition, we quantified micro-structural differences in fractional anisotropy (FA), mean diffusivity (MD), and radial diffusivity (RD) for every tract by fitting linear mixed-effects models for every pyAFQ tract segment and determining significant clusters using a permutation-test procedure. Results: We observed no statistically significant macro-structural differences between deaf signers and hearing non-signers in the core language network (i.e., the left arcuate fasciculus) or in any of the other language-related white-matter pathways in the left hemisphere, in line with earlier work (Finkl et al., 2019; Cheng et al., 2019). However, we found a significant group difference in the total number of streamlines (higher in the deaf signers) for the right posterior arcuate fasciculus, which connects parietal and posterior temporal regions. The lateralization analysis indicated similar lateralization profiles for all pathways across groups. We also observed the following micro-structural group differences: the left and right uncinate fasciculi differed in FA and MD, respectively, and both the right arcuate fasciculus and the right superior longitudinal fasciculus differed in FA as well as RD. Conclusion: Our data indicate the modality-independent and universal nature of the core left-hemispheric white-matter pathways subserving language processing. At the same time, the observed right-hemispheric macro- and micro-structural differences related to deafness and sign language acquisition constitute a neurobiological alteration that potentially reflects modality-specific processing demands of sign languages.
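As an illustration of the lateralization index defined in the Methods, the short sketch below computes LI per pathway from left- and right-hemisphere streamline counts. The tract names and counts are invented placeholders, not values from this study.

```python
# Lateralization index LI = (S_left - S_right) / (S_left + S_right),
# computed from cleaned streamline counts per pathway (placeholder values).
left_counts = {"arcuate": 4200, "SLF": 3100, "posterior_arcuate": 1800}
right_counts = {"arcuate": 3500, "SLF": 3300, "posterior_arcuate": 2100}

def lateralization_index(s_left, s_right):
    """Positive values indicate leftward asymmetry, negative values rightward."""
    return (s_left - s_right) / (s_left + s_right)

for tract in left_counts:
    li = lateralization_index(left_counts[tract], right_counts[tract])
    print(f"{tract}: LI = {li:+.2f}")
```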
Talk 3: ERP correlates during sentence processing of German Sign Language agreement verbs in deaf early child signers
Janika Stille1, Anne Wienholz1, Annika Herrmann1, Barbara Hänel-Faulhaber1; 1Universität Hamburg
Sign languages use the three-dimensional signing space to realize various functions, for example marking syntactic information. Agreement verbs move between locations in the signing space that have previously been linked to referents. Thus, the starting and end points of agreement verbs may mark the subject and object of an utterance. How deaf children process violations of these syntactic markings has not yet been investigated using event-related brain potentials (ERPs). When processing semantically or syntactically anomalous signed sentences, deaf adults demonstrate neural correlates similar to those of hearing adults processing spoken language (e.g., Capek et al., 2009; Hänel-Faulhaber et al., 2014). Whether similar neural correlates are present in deaf children using sign language is currently unknown. When hearing spoken language, hearing children, like adults, show an N400 for semantic violations and a P600 for morphosyntactic violations compared to correct sentences. However, children typically show a later and larger N400 and a later and smaller P600 than adults (e.g., Hahne et al., 2004; Schneider & Maguire, 2019). We therefore expected deaf children to show comparable neural correlates, with an N400 for semantic violations and a P600 for morphosyntactic violations. This ERP study investigated the neural processes engaged during sign language processing of different verb types in deaf children who were exposed to German Sign Language (DGS) before the age of three years, i.e., early signers. Here, we focus on agreement verbs, a subset of the experiment. Twenty-six children with DGS as their first language (mean AoA = 10 months [0-30 months]; mean age = 9;11 years [8;7-11;11 years]) were presented with videos of 45 signed DGS sentences containing agreement verbs. The sentences were grammatically correct or contained either a semantic violation (implausible object) or a morphosyntactic violation (incorrect direction of movement). A probe verification task was used to keep the children engaged during the study. ERPs were recorded from 32 scalp electrodes. We analyzed mean amplitudes from the point in time when the handshape of the verb sign was fully recognizable (target handshape) until 1000 ms, in 100 ms time windows, using generalized additive mixed-effects models. The data in each time window were modeled as a function of electrode position and experimental condition. For semantic violations compared to correct sentences, we observed a greater negativity at central-posterior electrode sites that was particularly pronounced between 500-600 ms. For morphosyntactic violations compared to correct sentences, we detected a negativity with a central-posterior distribution between 600-700 ms, comparable to the effect for the semantic violations. Thus, deaf early child signers demonstrated an N400 effect for sentences containing semantic violations in a later time window than adults, as previously shown for hearing children. However, the effects for morphosyntactic violations were somewhat unexpected, suggesting that deaf children process this type of violation more like a semantic violation. Some sign language studies have reported comparable results for specific agreement violations in deaf adults (Capek et al., 2009; Hosemann et al., 2018). We will discuss other potential explanations, including task effects and the handling of thematic role assignment (Brouwer & Crocker, 2017; Kos et al., 2010).
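For concreteness, the sketch below shows one way to extract mean amplitudes in consecutive 100 ms windows, time-locked to the target handshape (0-1000 ms), as would subsequently enter the generalized additive mixed-effects models. The sampling rate, array shapes, and data are hypothetical assumptions, not the study's recordings.

```python
# Windowed mean-amplitude extraction from epoched ERP data (placeholder data),
# time-locked to the target handshape and covering 0-1000 ms.
import numpy as np

sfreq = 500  # sampling rate in Hz (hypothetical)
n_trials, n_electrodes, n_samples = 45, 32, sfreq  # one 1000 ms epoch per trial
rng = np.random.default_rng(1)
epochs = rng.standard_normal((n_trials, n_electrodes, n_samples))  # microvolts

# Mean amplitude per trial and electrode in consecutive 100 ms windows; these
# values would then be modeled as a function of electrode position and
# experimental condition (e.g., with a generalized additive mixed model).
window_ms = 100
samples_per_window = int(sfreq * window_ms / 1000)
n_windows = n_samples // samples_per_window

windowed = epochs.reshape(
    n_trials, n_electrodes, n_windows, samples_per_window
).mean(axis=-1)
print(windowed.shape)  # (45 trials, 32 electrodes, 10 time windows)
```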