Poster Presentation


Neural Decoding of Autobiographical Visual Imagery Cued by Text

Poster A32 in Poster Session A, Friday, September 12, 11:00 am - 12:30 pm, Field House

Andrew Anderson¹; ¹Medical College of Wisconsin

The human brain’s ability to mentally visualize autobiographical experiences has been linked to the Medial Temporal subsystem of the Default Mode Network (MT-DMN). In contrast, the Frontotemporal subsystem (FT-DMN) has been proposed to support more abstract verbal, semantic, and social cognition, and the Core subsystem (Core-DMN) has been associated with self-referential cognition. Whilst broad patterns of activation and deactivation within DMN subsystems have been thoroughly investigated by contrasting brain responses to different neurocognitive tasks, and MT-DMN has been observed to activate during visuospatial imagery and pictorial semantic tasks, the representational codes underlying the mental visualization of autobiographical scenes have been understudied. This is in large part because it is challenging to derive computational models that can identify idiosyncratic visual components of self-generated mental images in functional brain scans. To address this question, we scanned fifty people’s brain activity with fMRI as they imagined their personal experience of twenty natural scenarios when presented with generic written scenario cues (e.g., wedding, funeral, driving). Visual imagery was modeled using: (1) image-generation AI models to depict participants’ verbal descriptions of their mental images, made outside the scanner; and (2) image-recognition models to re-represent the synthetic images as abstract visual representations of the kind that have been applied to model neural responses in high-level visual perception and the imagination of visual objects. Non-visual autobiographical semantics associated with participants’ mental image descriptions were modeled using the large language model GPT-2. A Representational Similarity Analysis (RSA) revealed that MT-DMN reflected participant-specific visual model structure when controlling for GPT-2 features, and this effect was not present in FT-DMN or Core-DMN. MT-DMN also reflected GPT-2 information structure when controlling for the visual model; however, greater and equivalent effects were observed in Core-DMN and FT-DMN, respectively. To further evaluate whether the MT-DMN visual correlates were a product of active autobiographical imagery, the same modeling and RSA approaches were deployed to test a separate fMRI activation dataset scanned as fourteen different participants read imageable third-person sentences in the absence of an active mental imagery task. Without active imagery, the visual effect in MT-DMN was also absent; however, MT-, FT-, and Core-DMN all reflected GPT-2 structure relatively strongly. These findings help to characterize the neural bases of autobiographical mental imagery and sentence reading, and identify image AI models as valuable tools for exposing the neural correlates of autobiographical visual imagery.
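For readers unfamiliar with partial-correlation RSA, the sketch below illustrates the general logic of the analysis described above: build representational dissimilarity matrices (RDMs) from brain activity and from each model’s features, then correlate the brain RDM with the visual-model RDM while partialing out the GPT-2 RDM. This is a minimal illustration, not the authors’ code; all variable names, feature dimensionalities, and the specific choices of correlation distance and Spearman partial correlation are assumptions for demonstration.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import rankdata

def rdm(features):
    # Representational dissimilarity matrix: condensed vector of pairwise
    # correlation distances between per-scenario feature vectors.
    return pdist(features, metric="correlation")

def partial_spearman(x, y, z):
    # Spearman correlation of x and y controlling for z: rank-transform all
    # three, regress z (plus an intercept) out of x and y, then correlate
    # the residuals.
    rx, ry, rz = rankdata(x), rankdata(y), rankdata(z)
    design = np.column_stack([rz, np.ones_like(rz)])
    def residual(a):
        coef, *_ = np.linalg.lstsq(design, a, rcond=None)
        return a - design @ coef
    return np.corrcoef(residual(rx), residual(ry))[0, 1]

# Hypothetical inputs: one row per imagined scenario (20 in the study).
rng = np.random.default_rng(0)
brain  = rng.standard_normal((20, 500))   # e.g., MT-DMN voxel patterns
visual = rng.standard_normal((20, 1024))  # image-recognition embeddings
gpt2   = rng.standard_normal((20, 768))   # GPT-2 description embeddings

# Brain/visual-model similarity with the semantic model partialed out.
r = partial_spearman(rdm(brain), rdm(visual), rdm(gpt2))
print(f"partial RSA correlation (visual | GPT-2): {r:.3f}")

In practice such correlations would be computed per participant and per DMN subsystem, with significance assessed, for example, by permuting scenario labels; the sketch omits those steps.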

Topic Areas: Meaning: Lexical Semantics
