
Decoding attention in polyphonic music listening


Presenter Name: Lucas Klein

School/Affiliation: McMaster University

Co-Authors: Aedan Rourke, Hany Tawfik, Dan Bosnyak, Laurel J. Trainor


The human auditory system can identify and distinguish individual sound sources based on the perceptual organization of acoustic features, and neural responses encode spectrotemporal characteristics of sound envelopes. A growing body of electrophysiological evidence points to the role of top-down processes in auditory stream segregation: the auditory cortex encodes features of attended sounds to a higher degree than unattended sounds. This allows the target of auditory selective attention to be decoded from single-trial EEG data.

A decoding approach called stimulus reconstruction—which learns linear mappings from EEG responses to presented auditory stimuli—has been used to determine which of two simultaneous competing speech streams is being attended. Reconstructions of attended sound streams are more accurate than reconstructions of unattended streams. However, whereas multiple speech streams compete for a listener’s attention, a goal of polyphonic music is the integration (coordination) of multiple distinct sounds (notes or instruments) to reveal musical elements, such as harmony. Little is known about the attentional mechanisms involved in listening to mixtures of sounds in polyphonic music, which are both differentiable and integrable.
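The stimulus-reconstruction logic described above can be sketched in a few lines of numpy. This is a minimal illustration on synthetic data, not the study's actual pipeline: the simulated "EEG" (a lagged linear mixture of the attended envelope plus noise), the ridge regularization, and all dimensions are assumptions made for the example. A linear backward model is fit from time-lagged EEG to the attended envelope on training data, and the reconstruction is then correlated with the attended and unattended envelopes on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_env(T, k=50):
    # toy "amplitude envelope": low-pass filtered random signal
    x = rng.standard_normal(T + k)
    return np.convolve(x, np.ones(k) / k, mode="valid")[:T]

T, n_ch, n_lags = 4000, 16, 8          # samples, EEG channels, decoder lags (illustrative)
env_att = smooth_env(T)                 # attended stream's envelope
env_unatt = smooth_env(T)               # unattended stream's envelope

# Simulated EEG: each channel is a randomly weighted, randomly lagged
# copy of the ATTENDED envelope plus additive noise.
eeg = np.zeros((T, n_ch))
for ch in range(n_ch):
    lag = int(rng.integers(0, n_lags))
    eeg[lag:, ch] = rng.standard_normal() * env_att[:T - lag]
eeg += 0.5 * rng.standard_normal((T, n_ch))

def lagged(X, n_lags):
    # time-lagged design matrix: [X(t), X(t-1), ..., X(t-n_lags+1)]
    T, C = X.shape
    out = np.zeros((T, C * n_lags))
    for l in range(n_lags):
        out[l:, l * C:(l + 1) * C] = X[:T - l]
    return out

X = lagged(eeg, n_lags)
half = T // 2
Xtr, Xte, ytr = X[:half], X[half:], env_att[:half]

# Ridge-regularized linear decoder: w = (X'X + lam*I)^-1 X'y
lam = 1e2
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
recon = Xte @ w

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The attended envelope should be reconstructed more accurately
r_att = corr(recon, env_att[half:])
r_unatt = corr(recon, env_unatt[half:])
print(f"attended r = {r_att:.2f}, unattended r = {r_unatt:.2f}")
```

Because the synthetic EEG is driven by the attended envelope, the held-out correlation with the attended stream is much higher than with the unattended one, which is the contrast used to decode the target of attention.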

To investigate attention in polyphonic music listening, we asked listeners to attend to either the high part, the low part, or both parts together of 25-s clips from Bach's two-part inventions, played in distinct timbres, while we recorded EEG. Data analysis is ongoing, but we expect the envelope of the attended part or mixture to be more highly correlated with the corresponding stimulus reconstructions.

Poster PDF