The sound of silence: Predictive error responses to unexpected sound omission in adults
Presenter Name: David Prete
Co-Authors: David Heikoo; Josh McGillivray; Jim Reilly; Laurel Trainor
Detecting patterns is vital for processing speech and music. Predictive coding theorizes that the brain predicts incoming sounds, compares those predictions to incoming sensory input, and generates prediction errors whenever a mismatch between prediction and input occurs. Predictive coding can be indexed in electroencephalography (EEG) with the mismatch negativity (MMN) and P3a, two event-related potential (ERP) components elicited by infrequent deviant sounds (e.g., differing in pitch, duration, or loudness) in a stream of frequent sounds. If these components reflect prediction error, omitting an expected sound should also elicit these responses relative to an expected silence. However, few studies have compared these two types of silences. Thus, we compared ERPs elicited by infrequent, random omissions (unexpected silences) in tone sequences presented at 2 tones/sec to ERPs elicited by frequent, regular omissions (expected silences) within a sequence of tones, and to a constant silence (resting-state EEG). Unexpected silences elicited significant MMN and P3a compared to constant and expected silences, although the magnitude of these ERP components was quite small and variable. Further exploratory analyses showed that global EEG field power differed more between the expected and unexpected silences during the time windows of the typical MMN (100-300 ms after omission onset) and typical P3a (300-500 ms) responses than during baseline. Additionally, the scalp distributions of the expected and unexpected silences differed significantly at the time of a typical P3a. These results provide evidence for hierarchical predictive coding, indicating that the brain predicts silences as well as sounds.