Adversarial Variational Embedding for Robust Semi-supervised Learning

by Xiang Zhang, et al.

Semi-supervised learning leverages unlabelled data when labelled data is difficult or expensive to acquire. Deep generative models (e.g., the Variational Autoencoder (VAE)) and semi-supervised Generative Adversarial Networks (GANs) have recently shown promising performance in semi-supervised classification owing to their strong discriminative representation ability. However, the latent code learned by a traditional VAE is not exclusive (repeatable) for a given input sample, which limits its classification performance. In particular, the learned latent representation depends on a non-exclusive component that is stochastically sampled from the prior distribution. Moreover, semi-supervised GAN models generate data from a pre-defined distribution (e.g., Gaussian noise) that is independent of the input data distribution, which may obstruct convergence and makes the distribution of the generated data difficult to control. To address these issues, we propose a novel Adversarial Variational Embedding (AVAE) framework for robust and effective semi-supervised learning, combining the strengths of the GAN as a high-quality generative model and the VAE as a posterior-distribution learner. The proposed approach first produces an exclusive latent code through a model we call VAE++, which meanwhile provides a meaningful prior distribution for the GAN generator. The proposed approach is evaluated on four different real-world applications, and we show that our method outperforms state-of-the-art models, confirming that the combination of VAE++ and GAN yields significant improvements in semi-supervised classification.
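The exclusivity issue the abstract describes can be illustrated with a minimal toy sketch: a standard VAE draws its latent code stochastically via the reparameterization trick, so encoding the same input twice gives different codes, while an exclusive (deterministic) code is repeatable. The encoder weights, the use of the posterior mean as the exclusive code, and all function names below are illustrative assumptions, not the paper's actual VAE++ construction (which involves adversarial training).

```python
import random

def encode(x):
    """Toy VAE encoder: map an input to a Gaussian posterior (mu, sigma).
    The linear map and fixed sigma are arbitrary placeholders, not a
    trained model."""
    mu = [0.5 * v for v in x]
    sigma = [0.1 for _ in x]
    return mu, sigma

def sample_latent(mu, sigma, rng):
    """Standard VAE latent code via reparameterization:
    z = mu + sigma * eps, eps ~ N(0, 1).
    Stochastic, hence NOT exclusive for a given input."""
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

def exclusive_latent(mu, sigma):
    """Exclusive code, illustrated here simply as the posterior mean:
    deterministic, hence repeatable for the same input."""
    return list(mu)

x = [1.0, -2.0, 0.3]
mu, sigma = encode(x)

rng = random.Random(0)
z1 = sample_latent(mu, sigma, rng)
z2 = sample_latent(mu, sigma, rng)
print(z1 == z2)   # stochastic sampling: two encodings of x differ

e1 = exclusive_latent(mu, sigma)
e2 = exclusive_latent(mu, sigma)
print(e1 == e2)   # deterministic code: repeatable across encodings
```

A repeatable code like `e1` could then serve as the input-dependent prior for a GAN generator, in contrast to noise drawn independently of the data.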


EC-GAN: Low-Sample Classification using Semi-Supervised Algorithms and GANs

Semi-supervised learning has been gaining attention as it allows for per...

Semi-supervised Target-level Sentiment Analysis via Variational Autoencoder

Target-level aspect-based sentiment analysis (TABSA) is a long-standing ...

Progressive VAE Training on Highly Sparse and Imbalanced Data

In this paper, we present a novel approach for training a Variational Au...

Doubly Stochastic Adversarial Autoencoder

Any autoencoder network can be turned into a generative model by imposin...

VAE Approximation Error: ELBO and Conditional Independence

The importance of Variational Autoencoders reaches far beyond standalone...

Semi-supervised Learning of Galaxy Morphology using Equivariant Transformer Variational Autoencoders

The growth in the number of galaxy images is much faster than the speed ...
