PERI: Part Aware Emotion Recognition In The Wild

10/18/2022
by Akshita Mittel, et al.

Emotion recognition aims to interpret the emotional state of a person from various inputs, including audio, visual, and textual cues. This paper focuses on emotion recognition from visual features. To leverage the correlation between facial expression and emotional state, pioneering methods rely primarily on facial features. However, facial features are often unreliable in natural, unconstrained scenarios, such as crowded scenes, where the face lacks pixel resolution and contains artifacts due to occlusion and blur. To address this, in-the-wild emotion recognition exploits full-body person crops as well as the surrounding scene context. However, in relying on body pose, such methods fail to fully exploit facial expressions when they are available. The aim of this paper is therefore two-fold. First, we present our method, PERI, which leverages both body pose and facial landmarks. We create part-aware spatial (PAS) images by extracting key regions from the input image using a mask generated from both body pose and facial landmarks. This allows us to exploit body pose in addition to facial context whenever it is available. Second, to reason from the PAS images, we introduce context infusion (Cont-In) blocks. These blocks attend to part-specific information and pass it into the intermediate features of an emotion recognition network. Our approach is conceptually simple and can be applied to any existing emotion recognition method. We report results on the publicly available in-the-wild EMOTIC dataset. Compared to existing methods, PERI achieves superior performance, with significant improvements in the mAP of emotion categories and lower Valence, Arousal, and Dominance errors. Importantly, our method improves performance both on images with fully visible faces and on images with occluded or blurred faces.
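The two ideas in the abstract can be sketched informally. The NumPy toy below is not the authors' implementation: the Gaussian-blob mask, the function names, and the sigmoid-gate fusion are all illustrative assumptions about what a part-aware mask and a context-infusion step might look like.

```python
import numpy as np

def part_aware_mask(keypoints, h, w, sigma=8.0):
    """Soft mask with a Gaussian blob at each body/face keypoint.
    Illustrative stand-in for a part-aware mask; PERI's exact
    construction may differ."""
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=np.float32)
    for (x, y) in keypoints:
        blob = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        mask = np.maximum(mask, blob)  # union of blobs, values in [0, 1]
    return mask

def pas_image(image, mask):
    """Part-aware spatial (PAS) image: input weighted by the soft mask,
    so pose/landmark regions are emphasised."""
    return image * mask[..., None]

def cont_in(features, pas_feats):
    """Toy context-infusion step: modulate intermediate network features
    with part-aware context via a sigmoid gate plus a residual path
    (an assumption, not the paper's Cont-In block)."""
    gate = 1.0 / (1.0 + np.exp(-pas_feats))
    return features * gate + features
```

A Cont-In-style block can be dropped between stages of any backbone, which matches the abstract's claim that the approach applies to existing emotion recognition networks.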

Related research

Exploiting Facial Landmarks for Emotion Recognition in the Wild (03/30/2016)
In this paper, we describe an entry to the third Emotion Recognition in ...

Using Scene and Semantic Features for Multi-modal Emotion Recognition (08/01/2023)
Automatic emotion recognition is a hot topic with a wide range of applic...

Medical Face Masks and Emotion Recognition from the Body: Insights from a Deep Learning Perspective (02/20/2023)
The COVID-19 pandemic has undoubtedly changed the standards and affected...

Context Based Emotion Recognition using EMOTIC Dataset (03/30/2020)
In our everyday lives and social interactions we often try to perceive t...

Affective Processes: stochastic modelling of temporal context for emotion and facial expression recognition (03/24/2021)
Temporal context is key to the recognition of expressions of emotion. Ex...

An audiovisual and contextual approach for categorical and continuous emotion recognition in-the-wild (07/07/2021)
In this work we tackle the task of video-based audio-visual emotion reco...

EMOCA: Emotion Driven Monocular Face Capture and Animation (04/24/2022)
As 3D facial avatars become more widely used for communication, it is cr...
