Hey ChatGPT, what are the neural bases for integrating the meaning of single words into sentences?
Poster Session C, Saturday, September 13, 11:00 am - 12:30 pm, Field House
Andrea Belluzzi1, Scott L. Fairhall1; 1University of Trento
Introduction - The neural representation of single concepts has been studied extensively, identifying a left-lateralised network encompassing the ATL, AG, medial PFC, precuneus, IFG, VTC, and pMTG: the semantic system. Much less is known about the neural basis for combining single concepts into increasingly complex representations, such as sentences. To investigate this combinatorial process, we leveraged sentence embeddings derived from large language models (LLMs), which capture the overall meaning of a sentence, to examine how this process is distributed across the semantic system.

Methods and Results - We reanalysed an fMRI study (N=24) in which participants read 240 Italian sentences, each formed by a subject, a verb, and a complement (e.g. “The cops arrest the thieves”). To isolate sentence-level meaning from that of the individual words, sentence embeddings (SONAR, by Meta) were used to build (a) a model of the sentences in their original word order and (b) a model derived from the same sentences with scrambled word order. Using representational similarity analysis (RSA), we contrasted the ordered and scrambled models (a > b), revealing that sentence-level meaning is represented uniformly across the whole semantic system. Although this manipulation produced robust regional differences in the magnitude of representational strength, no region showed a greater relevance to combinatorial meaning when the proportional change in captured information was assessed. To further investigate whether different regions within the semantic system support distinct aspects of the combinatorial process, we focused on the roles played by nouns. Specifically, we constructed two “impoverished” models in which sentence meaning was degraded by removing one sentence component: (c) the subject noun (“The arrest the thieves”) or (d) the final noun (“The cops arrest the”). Removal of the subject (c > d) more strongly affected sentence-level meaning in core areas of the semantic system: the precuneus, left ATL, and vmPFC. In contrast, removal of the final noun (d > c) more strongly impacted sentence meaning in regions that process contextual associations: the right PPA and RSC.

Conclusion - The comparison between original and scrambled sentences suggests that the semantic system is uniformly engaged in integrating single words into sentence-level meaning. When specific sentence components were removed, however, dissociable brain regions were differentially affected: core regions of the semantic system provide the foundation of sentence meaning, while regions known for processing contextual associations integrate new information into the unfolding sentence. This study demonstrates an innovative way to use LLMs, specifically sentence embeddings, by altering aspects of the original sentences (such as word order or specific components) to probe distinct linguistic and semantic processes.
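As an illustration of the embedding-based modelling described above, the sketch below builds the four sentence variants (original order, scrambled order, subject noun removed, final noun removed) and encodes them with Meta's open-source SONAR text encoder. It assumes the `sonar-space` Python package; the example sentence and the word-level manipulations are simplified illustrations, not the authors' exact preprocessing pipeline.

```python
# Minimal sketch, assuming the open-source `sonar-space` package
# (github.com/facebookresearch/SONAR). Sentence manipulations are illustrative.
import random

from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline

# One example sentence in the subject-verb-complement format described above.
sentences = ["I poliziotti arrestano i ladri"]  # "The cops arrest the thieves"

def scramble(sentence: str, seed: int = 0) -> str:
    """Randomly permute word order (model b)."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def drop_subject_noun(sentence: str) -> str:
    """Remove the subject noun, keeping its article (model c)."""
    words = sentence.split()
    return " ".join(words[:1] + words[2:])  # assumes an article + noun opening

def drop_final_noun(sentence: str) -> str:
    """Remove the sentence-final noun (model d)."""
    return " ".join(sentence.split()[:-1])

variants = {
    "ordered": sentences,                                   # model (a)
    "scrambled": [scramble(s) for s in sentences],          # model (b)
    "no_subject": [drop_subject_noun(s) for s in sentences],    # model (c)
    "no_final_noun": [drop_final_noun(s) for s in sentences],   # model (d)
}

encoder = TextToEmbeddingModelPipeline(
    encoder="text_sonar_basic_encoder", tokenizer="text_sonar_basic_encoder"
)

# Each variant yields one fixed-size (1024-d) embedding per sentence.
embeddings = {
    name: encoder.predict(sents, source_lang="ita_Latn")
    for name, sents in variants.items()
}
```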
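The RSA contrast can then be sketched as follows: each model's embeddings yield a representational dissimilarity matrix (RDM), which is correlated with a neural RDM from a searchlight or ROI, and the (a > b) contrast and the proportional change in captured information follow from those correlations. The cosine distance metric, the Spearman correlation, and the random placeholder data below are assumptions for illustration, not the authors' exact analysis settings.

```python
# Minimal RSA sketch. The neural RDM would come from searchlight/ROI fMRI
# patterns; here it is a random placeholder. Requires scipy >= 1.9.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def model_rdm(emb: np.ndarray) -> np.ndarray:
    """Condensed RDM over sentences (1 - cosine similarity)."""
    return pdist(emb, metric="cosine")

# Toy stand-ins: 240 sentences x 1024-d embeddings per model, plus a neural RDM.
rng = np.random.default_rng(0)
emb_ordered = rng.standard_normal((240, 1024))
emb_scrambled = rng.standard_normal((240, 1024))
neural_rdm = pdist(rng.standard_normal((240, 100)), metric="correlation")

rdm_a = model_rdm(emb_ordered)    # model (a): original word order
rdm_b = model_rdm(emb_scrambled)  # model (b): scrambled word order

rho_a = spearmanr(rdm_a, neural_rdm).statistic
rho_b = spearmanr(rdm_b, neural_rdm).statistic

# The (a > b) contrast: information lost by scrambling, as an absolute
# difference and as a proportional change in captured information.
print(f"ordered: {rho_a:.3f}, scrambled: {rho_b:.3f}")
print(f"absolute difference:  {rho_a - rho_b:.3f}")
print(f"proportional change:  {(rho_a - rho_b) / rho_a:.3f}")
```

The same two steps, swapping in models (c) and (d), would give the subject-removal versus final-noun-removal contrast reported above.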
Topic Areas: Syntax and Combinatorial Semantics, Computational Approaches