Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data

by Changde Du et al.

Emotion recognition faces three challenges. First, it is difficult to recognize a person's emotional state from a single modality alone. Second, manually annotating emotional data is expensive. Third, emotional data often suffer from missing modalities due to unforeseeable sensor malfunctions or configuration issues. In this paper, we address all three problems under a novel multi-view deep generative framework. Specifically, we propose to model the statistical relationships of multi-modality emotional data using multiple modality-specific generative networks with a shared latent space. By imposing a Gaussian mixture assumption on the posterior approximation of the shared latent variables, our framework can learn a joint deep representation from multiple modalities and simultaneously evaluate the importance of each modality. To address the scarcity of labeled data, we extend our multi-view model to the semi-supervised setting by casting semi-supervised classification as a specialized missing-data imputation task. To handle missing modalities, we further extend the semi-supervised multi-view model to incomplete data, treating a missing view as a latent variable that is integrated out during inference. In this way, the overall framework can exploit all available data, labeled and unlabeled, complete and incomplete, to improve its generalization ability. Experiments on two real multi-modal emotion datasets demonstrate the superiority of our framework.
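The Gaussian mixture posterior over the shared latent space can be illustrated with a minimal sketch. This is not the authors' code: the function name, the toy dimensions, and the fixed mixture weights are assumptions for illustration; in the actual framework the per-modality means, variances, and importance weights would be produced by learned inference networks.

```python
import numpy as np

def mixture_posterior_sample(means, logvars, weights, rng):
    """Draw z from the mixture posterior sum_m w_m * N(mean_m, var_m),
    where each component comes from one modality-specific encoder."""
    m = rng.choice(len(weights), p=weights)   # pick a modality by importance
    std = np.exp(0.5 * logvars[m])            # reparameterize that component
    return means[m] + std * rng.standard_normal(means[m].shape)

# Toy example: two modalities (e.g., physiological signals and eye movements)
# mapped into a shared 4-dimensional latent space.
rng = np.random.default_rng(0)
means = [np.zeros(4), np.ones(4)]
logvars = [np.full(4, -2.0), np.full(4, -2.0)]
weights = np.array([0.7, 0.3])  # stands in for learned modality importance
z = mixture_posterior_sample(means, logvars, weights, rng)
print(z.shape)  # prints (4,)
```

A sample of z drawn this way can then be fed to each modality-specific decoder; if one modality's input is missing, its component is simply dropped (renormalizing the weights), which mirrors treating the missing view as a variable to be integrated out.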

