InfoCatVAE: Representation Learning with Categorical Variational Autoencoders

06/20/2018
by   Edouard Pineau, et al.

This paper describes InfoCatVAE, an extension of the variational autoencoder that enables unsupervised disentangled representation learning. InfoCatVAE uses multimodal distributions for both the prior and the inference network, and maximizes the evidence lower bound objective (ELBO). We connect the new ELBO derived for our model with a natural soft clustering objective, which explains the robustness of our approach. We then adapt the InfoGAN method to our setting in order to maximize the mutual information between the categorical code and the generated inputs, obtaining an improved model.
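
The sketch below illustrates, in PyTorch, the ingredients named in the abstract: a categorical posterior, a multimodal (per-category Gaussian) prior, an ELBO-style loss, and an InfoGAN-style mutual-information term. It is a minimal illustration under stated assumptions, not the paper's implementation: the layer sizes, the fixed per-category prior means, the reuse of the encoder head as the auxiliary classifier, and the weight lambda_mi are all hypothetical choices.

```python
# Minimal sketch of a categorical VAE with a multimodal prior and an
# InfoGAN-style mutual-information term, in the spirit of the abstract.
# Layer sizes, the fixed per-category prior means, and lambda_mi are
# illustrative assumptions, not the paper's actual architecture or settings.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CatVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, n_cat=10, h_dim=256):
        super().__init__()
        self.z_dim, self.n_cat = z_dim, n_cat
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.cat_logits = nn.Linear(h_dim, n_cat)   # categorical posterior q(c|x)
        self.mu = nn.Linear(h_dim, z_dim)           # Gaussian posterior q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Multimodal prior p(z|c): one fixed Gaussian mode per category (assumption).
        self.register_buffer("prior_mu", 3.0 * torch.randn(n_cat, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        q_c = F.softmax(self.cat_logits(h), dim=-1)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec(z), q_c, mu, logvar


def loss_fn(model, x, lambda_mi=1.0):
    """ELBO-style loss plus a mutual-information term (binary-valued inputs assumed)."""
    x_logits, q_c, mu, logvar = model(x)
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum") / x.size(0)
    # KL(q(z|x) || p(z|c)) for every category c, then averaged under q(c|x).
    var = logvar.exp()
    kl_per_cat = 0.5 * ((var.unsqueeze(1)
                         + (mu.unsqueeze(1) - model.prior_mu.unsqueeze(0)) ** 2).sum(-1)
                        - model.z_dim - logvar.sum(-1, keepdim=True))
    kl_z = (q_c * kl_per_cat).sum(-1).mean()
    # KL(q(c|x) || Uniform(n_cat)) keeps the categorical code from collapsing.
    kl_c = (q_c * (q_c.clamp_min(1e-8).log() + math.log(model.n_cat))).sum(-1).mean()
    # InfoGAN-style term: sample a category, decode from its prior mode, and ask
    # the encoder head to recover the category (a proxy for maximizing I(c; x_gen)).
    c = torch.randint(model.n_cat, (x.size(0),), device=x.device)
    z_prior = model.prior_mu[c] + torch.randn(x.size(0), model.z_dim, device=x.device)
    x_gen = torch.sigmoid(model.dec(z_prior))
    c_logits = model.cat_logits(model.enc(x_gen))
    mi_loss = F.cross_entropy(c_logits, c)
    return recon + kl_z + kl_c + lambda_mi * mi_loss
```

With standard optimization (e.g. Adam on loss_fn over minibatches), category-conditional samples can then be drawn by decoding z sampled around prior_mu[c] for a chosen category c; this is only a usage note for the sketch above, not a description of the paper's experiments.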

Related research

10/22/2021 - Contrastively Disentangled Sequential Variational Autoencoder
Self-supervised disentangled representation learning is a critical task ...

04/18/2019 - Disentangled Representation Learning with Information Maximizing Autoencoder
Learning disentangled representation from any unlabelled data is a non-t...

09/25/2020 - Hierarchical Sparse Variational Autoencoder for Text Encoding
In this paper we focus on unsupervised representation learning and propo...

06/30/2022 - Laplacian Autoencoders for Learning Stochastic Representations
Established methods for unsupervised representation learning such as var...

11/09/2018 - Evidence Transfer for Improving Clustering Tasks Using External Categorical Evidence
In this paper we introduce evidence transfer for clustering, a deep lear...

05/25/2019 - The Variational InfoMax AutoEncoder
We propose the Variational InfoMax AutoEncoder (VIMAE), a method to trai...

06/07/2017 - InfoVAE: Information Maximizing Variational Autoencoders
It has been previously observed that variational autoencoders tend to ig...
