P2-22 Expressive Responses to Native and Non-Native Speech and Song Across Early Infancy
Name:Seyed Amirali Shafiei Masouleh
School/Affiliation:McMaster University
Co-Authors:Jesse K. Pazdera, Rafael Román-Caballero, Laurel J. Trainor, & Naiqi G. Xiao
Virtual or In-person:In-person
Short Bio:
I'm a Level 4 PNB Mental Health student at McMaster University, and I am writing a research
paper on this project with the help of my professor, Dr. Pazdera.
Abstract:
From the earliest months of life, infants are attuned to the emotional qualities of the sounds
around them. Both speech and song play central roles in shaping social connection and early
communication, yet it remains unclear whether infants respond to these forms of vocal
expression in distinct ways. Understanding how infants react emotionally to language and music
in both native and non-native contexts can shed light on how perception and emotion co-develop.
To address this, we are conducting a two-phase longitudinal study examining how infants’ facial
expressions change over time when they hear speech and song in their native versus non-native
language.
In the first phase, we used an iPhone TrueDepth camera to track facial expressions from 22
infants aged 4 to 6 months as they listened to speech and song in both English and Spanish.
Preliminary results showed greater overall expressiveness during speech than during song, with
only small and variable differences between languages. Infants tended to open their eyes wider
during song and
showed slightly more eye-squinting during speech, suggesting different forms of engagement
with each type of vocal input.
In the next phase, the same infants will be re-tested at 12 months. This longitudinal design
will allow us to examine developmental changes in emotional and facial responses as infants
gain more exposure to their native language.
We discuss future avenues for using widely available iOS cameras to study emotional expressions and
movements in music research.
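
As an illustrative aside for readers interested in the method: on TrueDepth-equipped iPhones,
Apple's ARKit framework exposes per-frame facial blend-shape coefficients (e.g., eyeWideLeft,
eyeSquintLeft) that correspond to the eye-widening and eye-squinting measures described above.
The sketch below shows one minimal way such coefficients could be read; it is a hypothetical
illustration (including the made-up class name FaceExpressionLogger), not the capture or
analysis pipeline actually used in this study, which the abstract does not specify.

import ARKit

// Minimal, hypothetical sketch of reading per-frame facial blend-shape
// coefficients from the TrueDepth camera via ARKit.
final class FaceExpressionLogger: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        // Face tracking requires a TrueDepth-equipped device.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Called by ARKit each time tracked anchors update.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            // Blend-shape coefficients range from 0 (neutral) to 1 (maximal).
            let eyeWide = face.blendShapes[.eyeWideLeft]?.floatValue ?? 0
            let eyeSquint = face.blendShapes[.eyeSquintLeft]?.floatValue ?? 0
            print("eyeWide=\(eyeWide) eyeSquint=\(eyeSquint)")
        }
    }
}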