Poster Presentation


Untangling musical and linguistic cognition using machine learning on low-resolution EEG data

Poster Session A, Friday, September 12, 11:00 am - 12:30 pm, Field House
This poster is part of the Sandbox Series.

Samantha Wray1, Lily Arrom1; 1Dartmouth College

Music and language are universal, with no known human communities lacking either. Cognition of both stems from a shared foundation of pattern-recognition processes and draws on many of the same acoustic features, such as pitch, prosody, and timing, but research also shows that the resources for processing them are ultimately separable, with distinct neural responses to unexpected or syntactically invalid items in both music and language (Patel 2003; Koelsch et al. 2005). However, like much of psycholinguistic research, existing work is often limited in scope to WEIRD societies (Bylund 2022), missing structurally more complex musical systems and rare practices such as musical speech surrogacy. The current project aims to validate a high-portability, low-resolution, commercial-grade EEG system – the 14-sensor Emotiv Epoc X – well suited to field sites, allowing the study of both language and music outside lab settings and expanding the scope of current research. While previous work has established the feasibility of high-portability EEG systems for coarse-grained sensory processing (Williams et al. 2020), the current work extends this to finer-grained domains. Crucially, EEG recordings – especially from high-portability, low-resolution 14-sensor systems – are plagued by a poor signal-to-noise ratio, even more so than recordings from higher-resolution systems. Computational approaches may address this weakness. N=25 right-handed participants (at time of submission, half of the projected N; age M=20.5, SD=2) read sentences presented in rapid serial visual presentation while a tonal sequence played simultaneously. The sentences varied across three conditions: grammatical, semantically implausible, and syntactically ungrammatical.
Tonal sequences varied across four conditions: regular tonic sequences and sequences with music-syntactically irregular final notes (Neapolitan), each with or without an instrument change partway through (stimulus structure largely following Koelsch et al. 2005). Two tasks ensured participant attention: an occasional grammaticality judgement, and monitoring for the instrument change. A simple ERP analysis – with data epoched from −100 ms to 700 ms relative to the onset of the final tone/word and bandpass filtered from 0.1 to 30 Hz, processing implemented in EEGLAB (Delorme & Makeig 2004) – showed separable components for semantic and syntactic incongruencies as well as for irregular tone sequences, with no visible effect of instrument change, but with appreciable cross-subject variation in component time course and polarity. Work in progress compares multiple machine learning approaches against the traditional ERP analysis to boost the signal and reduce noise. This includes decoding algorithms that have previously succeeded with EEG data, including convolutional neural networks (Schirrmeister et al. 2017; Song et al. 2022) and support vector machines (Saeidi et al. 2021), investigated alongside CEBRA (Schneider & Mathis 2023), which uses self-supervised learning to create embeddings of neural space for decoding neural activity. From the current work we aim to learn how best to analyze data from low-signal, high-noise equipment well suited to field sites, allowing us to study and document linguistic and musical practices outside lab settings and to address data sparsity in commercial systems.
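The ERP preprocessing described above (the authors used EEGLAB) can be sketched in Python with simulated data; everything here – sampling rate, onset times, the synthetic signal – is an assumption for illustration, not the authors' actual pipeline:

```python
# Illustrative sketch (not the authors' EEGLAB code) of the ERP
# preprocessing: bandpass filter 0.1-30 Hz, then epoch each trial from
# -100 ms to 700 ms around the onset of the final tone/word.
# Data, onsets, and the 256 Hz sampling rate are all simulated/assumed.
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(0)
sfreq = 256            # assumed sampling rate (Hz)
n_channels = 14        # Emotiv Epoc X sensor count
data = rng.normal(0.0, 1.0, size=(n_channels, 60 * sfreq))  # 60 s of fake EEG

# Zero-phase bandpass, 0.1-30 Hz (4th-order Butterworth)
sos = butter(4, [0.1, 30.0], btype="bandpass", fs=sfreq, output="sos")
filtered = sosfiltfilt(sos, data, axis=-1)

# Simulated stimulus onsets (in samples), one per second
onsets = np.arange(1, 55) * sfreq

# Epoch from -100 ms to 700 ms; baseline-correct on the prestimulus window
pre, post = int(0.1 * sfreq), int(0.7 * sfreq)
epochs = np.stack([filtered[:, t - pre:t + post] for t in onsets])
epochs -= epochs[:, :, :pre].mean(axis=-1, keepdims=True)

erp = epochs.mean(axis=0)  # average across trials, per channel
print(erp.shape)           # channels x samples
```

A real analysis would replace the simulated array with the recorded EEG and the synthetic onsets with trigger events, and compare the resulting per-condition averages.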
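One of the decoding baselines named above, a support vector machine, can be sketched as a within-subject trial classifier; the data, effect size, and label scheme below are hypothetical stand-ins, not the study's results:

```python
# Hypothetical SVM decoding sketch (cf. the Saeidi et al. 2021 baseline):
# classify grammatical vs. ungrammatical trials from flattened epoch
# amplitudes. Trials are simulated; the injected "component" for class 1
# is an assumption to make the toy problem decodable.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 14, 204
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)  # 0 = grammatical, 1 = ungrammatical
X[y == 1, :, 100:150] += 0.3           # fake late component for class 1

# Standardize features, then fit a linear SVM under 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

In the actual comparison, the same cross-validated accuracy metric would be computed per subject for each decoder (CNN, SVM, CEBRA embeddings) on the epoched EEG, letting the approaches be compared against the traditional ERP analysis.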

Topic Areas: Syntax and Combinatorial Semantics, Computational Approaches
