
Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): Arousal and Valence Validation

Presenter Name: Karla Kovacek

School/Affiliation: Ryerson University

Co-Authors: Steven Livingstone, Gurjit Singh, Frank A. Russo

Abstract:

Discrete and dimensional models for classifying emotions have been used extensively in the vocal emotion literature. Discrete models group emotions into distinct categories (e.g., happy, sad, fearful, angry). In contrast, dimensional models organize emotions along continua within an n-dimensional space. The most widely used dimensional model of emotion is the circumplex model of affect, which organizes emotions along a vertical dimension of arousal (calm to excited) and a horizontal dimension of valence (unpleasant to pleasant). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) has become a widely used tool in psychological and affective computing studies of categorical emotion in speech and song. The aim of the current study is to extend the use of the RAVDESS by validating the speech stimuli against the circumplex model of affect. This study recruited 24 young adults to rate 4,320 audio, visual, and audio-visual speech stimuli on dimensions of arousal and valence using two Self-Assessment Manikin (SAM) Likert scales. Preliminary data show the expected trends in arousal and valence ratings: happy and surprised stimuli are rated high in arousal and valence; angry, fearful, and disgusted stimuli are rated high in arousal but low in valence; sad stimuli are rated low in both arousal and valence; and calm stimuli are rated low in arousal and moderate in valence. These normative data will allow researchers to appropriately select RAVDESS stimuli for studying emotional speech through a dimensional model.
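To illustrate how such normative ratings could be used to select stimuli, the sketch below filters a set of per-stimulus mean arousal and valence scores against a region of the arousal-valence plane. This is a minimal illustration only: the ratings file (ravdess_norms.csv), its column names, and the use of 5 as the midpoint of a 9-point SAM scale are assumptions for the example, not part of the RAVDESS release or this study's materials.

```python
# Hypothetical sketch: selecting RAVDESS stimuli by normative arousal/valence.
# The CSV file, its column names, and the SAM-scale midpoint are assumptions.
import csv


def load_ratings(path):
    """Read per-stimulus mean arousal/valence ratings from a CSV file."""
    with open(path, newline="") as f:
        return [
            {
                "file": row["filename"],
                "arousal": float(row["mean_arousal"]),
                "valence": float(row["mean_valence"]),
            }
            for row in csv.DictReader(f)
        ]


def select_stimuli(ratings, min_arousal=None, max_arousal=None,
                   min_valence=None, max_valence=None):
    """Keep stimuli whose normative ratings fall inside the requested
    region of the arousal-valence plane (bounds of None are ignored)."""
    def in_range(value, lo, hi):
        return (lo is None or value >= lo) and (hi is None or value <= hi)

    return [
        r for r in ratings
        if in_range(r["arousal"], min_arousal, max_arousal)
        and in_range(r["valence"], min_valence, max_valence)
    ]


if __name__ == "__main__":
    ratings = load_ratings("ravdess_norms.csv")  # hypothetical file name
    # Example: high-arousal, low-valence stimuli (the angry/fearful/disgusted
    # region of the circumplex), assuming 5 is the 9-point SAM midpoint.
    for r in select_stimuli(ratings, min_arousal=5.0, max_valence=5.0):
        print(r["file"], r["arousal"], r["valence"])
```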

Poster PDF | Meeting Link