Analyzing Algorithmic Predictions of Emotion
Presenter Name: "Jackie" Zhi Qi Zhou
School/Affiliation: McMaster University (MAPLE Lab)
Co-Authors: Cameron J. Anderson & Michael Schutz
Music emotion recognition is an evolving area of research that aims to predict emotion in musical recordings using algorithms. Accurately identifying emotion in musical works requires expertise in music theory, auditory and emotion perception, signal processing, and machine learning (Kim et al., 2010). Although music information retrieval (MIR) algorithms are becoming more widely used, little research explores the efficacy of the cues used to predict emotion in music. To assess the accuracy and consistency of MIR predictions, we compare the predictions of algorithms in MIRToolbox to analyses of the same cues from scores, recordings, and MIDI renditions of widely analyzed musical works. Specifically, we analyze musical encodings from scores alongside acoustic cues (pitch, timing, and loudness) from four performers’ commercially recorded interpretations of the same pieces. Using formal music analyses of Chopin’s 24 Preludes (Op. 28, 1839) as a ground-truth dataset, we can evaluate the consistency of algorithmic predictions across interpretations that differ in performance cues but preserve compositional cues. We predicted consistency in compositional cues (pitch and modality) but greater variability in performance cues (loudness and timing), as differences among performers’ interpretations should affect algorithmic predictions. Preliminary findings reveal unexpected uniformity in loudness predictions; however, comparisons of pitch, timing, and modality information show unusual inconsistencies with the formal analyses. Further exploration applying MIR algorithms to these musical works will shed additional light on the cues most predictive of perceived emotion in music.
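To make the comparison concrete, the kind of low-level acoustic cues discussed above can be sketched in a few lines. The following is a minimal illustrative example, not MIRToolbox's actual implementation: it synthesizes a tone as a stand-in for a recorded performance and extracts two hypothetical cue measures, RMS energy (a common loudness proxy) and an autocorrelation-based pitch estimate. All names and parameters here are assumptions for illustration.

```python
import numpy as np

SR = 22050  # sample rate in Hz (illustrative choice)

def rms_loudness(frame):
    """Root-mean-square energy of a frame, a common proxy for loudness."""
    return float(np.sqrt(np.mean(frame ** 2)))

def autocorr_pitch(frame, sr=SR, fmin=80.0, fmax=1000.0):
    """Estimate fundamental frequency from the autocorrelation peak
    within a plausible pitch range (fmin..fmax)."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# Synthesize a 440 Hz tone as a stand-in for an excerpt of a recording.
t = np.arange(SR) / SR
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)

frame = tone[:2048]
print(autocorr_pitch(frame))   # close to 440 Hz
print(rms_loudness(frame))     # close to 0.5 / sqrt(2)
```

In a real pipeline these measures would be computed frame by frame across each performer's recording, so that loudness and timing profiles can be compared across interpretations while score-derived cues such as modality stay fixed.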