Syntactic Parallelism Between Music and Language in Relation to Key Memorization
Presenter Name: Laura E. Street
Co-Authors: Joanna Spyra, Dr. Matthew Woolhouse
Both music and language are intricate, ‘meaningful’ auditory sequences specific to human communication (Patel, 2008). Within modern cognitive science, the two have often been compared through an exploration of their perceived syntactic parallelisms (e.g., Woolhouse, Cross, & Horton, 2016). Our study builds on this notion, examining whether the recognized grammatical similarities between music and language extend to the memory domain. Language-related effects will be tested using congruent and incongruent sentences with embedded clauses (sentences that are semantically and syntactically correct or incorrect, respectively). Mimicking an embedded clause, the musical component of each stimulus will comprise three sections: in the musically congruent condition, the outer sections will share a key (ABA format), whereas the incongruent condition will move through three different keys (CBA format). In a 2×2 design, musical and linguistic congruence will be paired as follows: (1) congruent music with congruent language; (2) congruent music with incongruent language; (3) incongruent music with congruent language; and (4) incongruent music with incongruent language. Each music-language sequence will end with a probe cadence, which participants will rate for “goodness of musical completion.” If analyses show high completion ratings for musical stimuli paired with congruent language, but not with incongruent language, then the semantic and syntactic properties of language may positively influence musical key memorization. Such results would support theories that music and language have syntactic parallels, share neural resources, and, when paired as in song, can enhance the memorization of temporally nonadjacent keys or tonal areas.