Unifying the Discrete and Continuous Emotion labels for Speech Emotion Recognition

by Roshan Sharma, et al.

Traditionally, in paralinguistic analysis for emotion detection from speech, emotions have been identified with either discrete or dimensional (continuous-valued) labels, and models proposed for emotion detection use one or the other of these label types. However, psychologists such as Russell and Plutchik have proposed theories and models that unite these views, maintaining that the two representations carry shared and complementary information. This paper attempts to validate these viewpoints computationally. To this end, we propose a model that jointly predicts continuous and discrete emotional attributes, and we show how the relationship between them can be exploited to improve the robustness and performance of emotion recognition. Our approach comprises multi-task and hierarchical multi-task learning frameworks that jointly model the relationships between continuous-valued and discrete emotion labels. Experimental results on two widely used datasets for speech-based emotion recognition (IEMOCAP and MSP-Podcast) show that our model yields statistically significant improvements over strong baselines that do not unify the two label types. We also demonstrate that training with one type of label (discrete or continuous-valued) improves recognition performance on tasks that use the other type. Experimental results and the reasoning behind this approach, which we call mismatched training, are also presented.
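The multi-task framework described in the abstract can be illustrated with a minimal sketch: a shared encoder over utterance features feeding two heads, one for discrete emotion classes (cross-entropy) and one for continuous attributes such as arousal and valence (mean-squared error), with a weighted joint loss. This is a hypothetical illustration, not the authors' implementation; all names, dimensions, and the loss weight `alpha` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def multitask_forward(x, W_shared, W_cls, W_reg):
    """Shared encoder feeding two task-specific heads."""
    h = np.tanh(x @ W_shared)   # shared utterance representation
    logits = h @ W_cls          # discrete-emotion head (class scores)
    attrs = h @ W_reg           # continuous-attribute head (e.g. arousal, valence)
    return logits, attrs

def multitask_loss(logits, attrs, y_cls, y_reg, alpha=0.5):
    """Weighted sum of cross-entropy (discrete) and MSE (continuous)."""
    # Numerically stable log-softmax for the classification term.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(y_cls)), y_cls].mean()
    mse = ((attrs - y_reg) ** 2).mean()
    return alpha * ce + (1 - alpha) * mse

# Toy batch: 4 utterance embeddings of dim 8, 4 discrete emotion
# classes, 2 continuous attributes (arousal, valence).
x = rng.standard_normal((4, 8))
W_shared = 0.1 * rng.standard_normal((8, 16))
W_cls = 0.1 * rng.standard_normal((16, 4))
W_reg = 0.1 * rng.standard_normal((16, 2))

logits, attrs = multitask_forward(x, W_shared, W_cls, W_reg)
loss = multitask_loss(logits, attrs,
                      y_cls=np.array([0, 1, 2, 3]),
                      y_reg=rng.standard_normal((4, 2)))
print(logits.shape, attrs.shape)
```

Because both heads backpropagate through the same encoder, gradients from the continuous-attribute task shape the representation used for discrete classification and vice versa, which is the mechanism the paper's unified approach relies on; the hierarchical variant would instead condition one head on the other's output.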



