Poster Presentation

Formation of Hierarchical Neural Representations in Auditory Category Learning

Poster Session C, Saturday, September 13, 11:00 am - 12:30 pm, Field House

Chen Hong1, Gangyi Feng1; 1The Chinese University of Hong Kong

Effective speech and music perception relies on the ability to transform continuously changing acoustic signals into meaningful auditory categories. Listeners achieve this by learning to integrate various acoustic cues based on their immediate reliability and relevance while abstracting over them to create generalizable sound-to-category mappings. At the neural level, learning a novel auditory category may be supported by forming gradient hierarchical representations derived from integrating multiple acoustic representations along auditory pathways. To test this hypothesis and to examine how acoustic cues are represented in the brain over the course of learning to form new category representations, we asked 34 adults to complete a magnetoencephalography (MEG) experiment while they learned, with corrective feedback, to categorize novel speech-like sounds into two categories. These sounds differed in temporal and spectral modulation and fell into two categories divided by a boundary in the acoustic space; successful categorization therefore required integrating the two acoustic cues. Behavioral categorization accuracy increased significantly across blocks, indicating efficient category learning. We constructed six representation models: three acoustic models (temporal modulation, spectral modulation, and spectro-temporal similarity) and three higher-level, category-relevant models (category-boundary distance, binary category, and behavioral accuracy). We found robust encoding of all three acoustic models from ~100 ms after sound onset (temporal modulation emerging at ~200 ms) across all training sessions. Category-relevant representations (i.e., boundary distance and category) and behavioral decision representations emerged only in the later learning sessions, and these representations were maximal over left frontotemporal regions from 250 to 500 ms.
The higher-level representations correlated closely with the integrated acoustic representation (i.e., the spectro-temporal similarity model, computed by combining information from both cues). These findings reveal a two-stage structure of category learning: early, cue-specific encoding followed by the emergence of category-level and behavioral decision codes, consistent with our gradient-abstraction hypothesis of category learning. Our results provide neural evidence that auditory category learning is driven by progressive integration and abstraction of multiple acoustic cues. These findings refine models of human auditory cognition and highlight encoding modeling as a powerful tool for linking neurodynamics to the category-learning process.

Topic Areas: Language Development/Acquisition, Computational Approaches
