Improving Disentangled Representation Learning with the Beta Bernoulli Process

by Prashnna Kumar Gyawali et al.

To improve the ability of variational autoencoders (VAEs) to disentangle factors in the latent space, existing works mostly focus on enforcing independence among the learned latent factors. However, the ability of these models to disentangle often decreases as the complexity of the generative factors increases. In this paper, we investigate the little-explored effect of the modeling capacity of the posterior density on the disentangling ability of the VAE. We note that the independence within and the complexity of the latent density are two different properties we constrain when regularizing the posterior density: while the former promotes the disentangling ability of the VAE, the latter -- if overly limited -- creates unnecessary competition with the data reconstruction objective of the VAE. Therefore, if we preserve independence but allow richer modeling capacity in the posterior density, we lift this competition and thereby allow improved independence and data reconstruction at the same time. We investigate this theoretical intuition with a VAE that utilizes a non-parametric latent factor model, the Indian Buffet Process (IBP), as a latent density that is able to grow with the complexity of the data. Across three widely-used benchmark data sets and two clinical data sets little explored for disentangled learning, we qualitatively and quantitatively demonstrate the improved disentangling performance of IBP-VAE over the state of the art. In the two clinical data sets, which are riddled with complex factors of variation, we further demonstrate that unsupervised disentangling of nuisance factors via IBP-VAE -- when combined with a supervised objective -- can not only improve task accuracy in comparison to relevant supervised deep architectures, but also facilitate knowledge discovery related to task decision-making. A shorter version of this work will appear in the ICDM 2019 conference proceedings.
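The core idea of the IBP latent density can be illustrated with its truncated stick-breaking (Beta-Bernoulli) construction: stick-breaking weights produce decreasing activation probabilities, binary masks drawn from those probabilities gate continuous factor values, and the number of effectively active dimensions can grow with the concentration parameter and the data. The sketch below is a minimal, hypothetical illustration of sampling from such a prior (function name, truncation level `K`, and the Gaussian factor values are our assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def sample_ibp_latent(alpha=5.0, K=10, n=4, rng=None):
    """Sample sparse latent codes z = b * a under a truncated
    stick-breaking Beta-Bernoulli (IBP) construction.

    alpha : IBP concentration; larger alpha -> more active features.
    K     : truncation level (maximum number of latent features).
    n     : number of samples to draw.
    """
    rng = np.random.default_rng(rng)
    # Stick-breaking: nu_k ~ Beta(alpha, 1), pi_k = prod_{j<=k} nu_j,
    # so activation probabilities pi_1 >= pi_2 >= ... decay with k.
    nu = rng.beta(alpha, 1.0, size=K)
    pi = np.cumprod(nu)
    # Binary feature activations gate continuous factor values.
    b = rng.random((n, K)) < pi      # b_nk ~ Bernoulli(pi_k)
    a = rng.normal(size=(n, K))      # a_nk ~ N(0, 1)
    return b * a                     # sparse latent code

codes = sample_ibp_latent(rng=0)
```

In an IBP-VAE this prior replaces the fixed-dimensional Gaussian: the encoder amortizes posteriors over both the masks and the factor values, so dimensions the data does not need stay switched off rather than competing with reconstruction.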


