
P2-9 Time-resolved acoustics and selective attention in speech stream separation: an EEG decoding study

Name: Ellia Baines

School/Affiliation: McMaster University

Co-Authors: Kevin Yang, Shu Sakamoto, Laurel J. Trainor

Virtual or In-person: In-person

Short Bio:

Ellia Baines is a fourth-year undergraduate student at McMaster University in the Honours Biology and Psychology, Neuroscience & Behaviour B.Sc. program. She is currently working on a thesis project in McMaster’s Auditory Development Lab and is interested in the neuroscience and cognition of speech and music perception.

Abstract:

We process overlapping sounds from the environment by grouping them into distinct streams based on perceptual patterns and estimates made by the brain. Bottom-up cues such as pitch, timbre, and frequency contribute to this separation, while selectively attending to a sound object can further enhance its processing. In our previous study, we found that music and speech rely differently on selective attention: separating two incoming speech sounds requires top-down attention and engages longer-latency cortical processing (>250 ms), likely because certain speech pairs provide only small differences in bottom-up cues. Building on this, the present study examines the neural mechanisms underlying the interplay between stream separation and selective attention by analyzing time-resolved acoustic features of the stimuli. Participants heard two spatially segregated speech streams and performed a target-detection task under instructions to attend to one or both streams. Electroencephalography (EEG) decoding was used to assess the onset and strength of stream separation in cortex. For each speaker, acoustic features were extracted, including the fundamental frequency (F0, via Praat), the amplitude envelope, and spectro-temporal change, and their relationships to the degree of neural separation were analyzed within each attention condition. We test the hypothesis that greater moment-to-moment feature disparity (e.g., F0 divergence) predicts earlier and stronger stream separation, and that selective attention can compensate when bottom-up separation is weak. Preliminary analyses will be presented in the poster. This study aims to provide clearer insight into auditory stream formation and into how specific acoustic features interact with selective attention to shape the timing of speech stream separation.
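To make the feature-extraction step concrete, here is a minimal sketch of pulling a time-resolved F0 track via Praat (through the parselmouth Python bindings) and a Hilbert amplitude envelope from one speaker's recording. The function name, file name, and pitch-range parameters are illustrative assumptions, not details of the study's pipeline.

```python
# A minimal sketch, assuming the parselmouth Praat bindings, numpy, and scipy;
# the function name, file name, and pitch range below are illustrative only.
import numpy as np
import parselmouth
from scipy.signal import hilbert

def extract_speaker_features(wav_path, f0_floor=75.0, f0_ceiling=500.0):
    """Return time-resolved F0 (Hz) and amplitude envelope for one recording."""
    snd = parselmouth.Sound(wav_path)

    # F0 track from Praat's autocorrelation-based pitch analysis
    pitch = snd.to_pitch(pitch_floor=f0_floor, pitch_ceiling=f0_ceiling)
    f0 = pitch.selected_array["frequency"]  # 0.0 at unvoiced frames
    f0_times = pitch.xs()

    # Amplitude envelope as the magnitude of the analytic signal
    samples = snd.values[0]                 # first (or only) channel
    envelope = np.abs(hilbert(samples))
    env_times = snd.xs()

    return (f0_times, f0), (env_times, envelope)

# Hypothetical usage: one call per speaker, then compare the tracks moment by
# moment, e.g., |F0_A(t) - F0_B(t)| as an F0-divergence measure.
# (f0_t, f0), (env_t, env) = extract_speaker_features("speaker_A.wav")
```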
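Likewise, a minimal sketch of time-resolved EEG decoding, here using MNE-Python's SlidingEstimator with simulated data standing in for the study's recordings; the trial counts, classifier, and labels are assumptions for illustration.

```python
# A minimal sketch, assuming MNE-Python and scikit-learn; the simulated data,
# trial counts, and classifier stand in for the study's actual recordings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import SlidingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 64, 200
X = rng.standard_normal((n_trials, n_channels, n_times))  # stand-in EEG epochs
y = rng.integers(0, 2, n_trials)                          # attended-stream label
times = np.linspace(-0.2, 0.8, n_times)                   # seconds (illustrative)

# Fit an independent classifier at every time point; the latency at which
# scores rise above chance indexes the onset of neural stream separation.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder = SlidingEstimator(clf, scoring="roc_auc", n_jobs=1)
scores = cross_val_multiscore(decoder, X, y, cv=5).mean(axis=0)

peak = int(scores.argmax())
print(f"peak decoding AUC {scores[peak]:.2f} at t = {times[peak]:.3f} s")
```

In an analysis of the kind the abstract describes, these per-timepoint decoding scores would then be related to moment-to-moment feature disparity between the two speakers within each attention condition.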

Poster PDF