Whose Emotion Matters? Speaker Detection without Prior Knowledge

by Hugo Carneiro, et al.

The task of emotion recognition in conversations (ERC) benefits from the availability of multiple modalities, as offered, for example, in the video-based MELD dataset. However, only a few research approaches use both the acoustic and the visual information from the MELD videos. There are two reasons for this: first, the label-to-video alignments in MELD are noisy, making those videos an unreliable source of emotional speech data; second, conversations can involve several people in the same scene, which requires detecting the person who speaks the utterance. In this paper we demonstrate that, by using recent automatic speech recognition and active speaker detection models, we are able to realign the videos of MELD and to capture the facial expressions of the uttering speakers in 96.92% of the cases. Evaluations with a self-supervised voice recognition model indicate that the realigned MELD videos more closely match the corresponding utterances offered in the dataset. Finally, we devise a model for emotion recognition in conversations trained on the face and audio information of the realigned MELD videos, which outperforms state-of-the-art models for ERC based on vision alone. This indicates that active speaker detection is indeed effective for extracting facial expressions from the uttering speakers, and that faces provide more informative visual cues than the visual features state-of-the-art models have been using so far.
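The core selection step of the pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the score format are assumptions, standing in for the per-frame audio-visual synchronisation scores that an active speaker detection model would produce for each detected face track in a scene.

```python
def select_active_speaker(face_scores, threshold=0.5):
    """Pick the face track most likely to belong to the uttering speaker.

    face_scores: dict mapping a face-track id to a list of per-frame
    audio-visual sync scores (higher = more likely to be speaking),
    as a hypothetical ASD model might emit for one utterance.
    Returns (track_id, mean_score), or (None, 0.0) if no track clears
    the threshold, i.e. nobody visible in the scene is speaking.
    """
    best_id, best_mean = None, 0.0
    for track_id, scores in face_scores.items():
        mean = sum(scores) / len(scores)
        if mean > best_mean:
            best_id, best_mean = track_id, mean
    if best_mean < threshold:
        return None, 0.0
    return best_id, best_mean

# Toy example: two faces appear in the scene; face "B" has consistently
# higher sync scores, so its facial expressions would be extracted.
scores = {"A": [0.10, 0.20, 0.15], "B": [0.80, 0.90, 0.85]}
speaker, confidence = select_active_speaker(scores)
```

In a full pipeline, the selected track's face crops would then feed the vision branch of the ERC model, while utterances with no confident speaker would be flagged as unreliable, mirroring the realignment quality check described in the abstract.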




