Learning Inverse Mappings with Adversarial Criterion

02/13/2018
by Jiyi Zhang, et al.

We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G, which maps an arbitrary latent code distribution to a data distribution, and an encoder E, which embodies an "inverse mapping" that encodes a data sample into a latent code vector. Unlike previous hybrid approaches that leverage an adversarial training criterion in constructing autoencoders, FAAE minimizes reencoding errors in the latent space and exploits the adversarial criterion in the data space. Experimental evaluations demonstrate that the proposed framework produces sharper reconstructed images while at the same time enabling inference that captures a rich semantic representation of the data.
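The reencoding objective described above can be illustrated with a minimal numpy sketch: with linear maps standing in for G and E, gradient descent drives the reencoding error ||E(G(z)) - z||^2 toward zero. This is only an illustration of the latent-space criterion, with made-up names and dimensions; the adversarial criterion in data space (a discriminator on G's outputs) is omitted for brevity and is not part of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, data_dim = 4, 16

# Linear stand-ins for the generator G (latent -> data)
# and the encoder E (data -> latent); illustrative only.
G = rng.normal(scale=0.1, size=(data_dim, latent_dim))
E = rng.normal(scale=0.1, size=(latent_dim, data_dim))

lr = 0.05
for step in range(2000):
    z = rng.normal(size=(32, latent_dim))  # batch of latent codes
    x = z @ G.T                            # "generated" data samples
    z_hat = x @ E.T                        # reencoded latent codes
    err = z_hat - z                        # reencoding residual
    loss = np.mean(err ** 2)               # latent-space reencoding error
    # Gradients of the mean squared reencoding error w.r.t. E and G.
    grad_E = 2 * err.T @ x / err.size
    grad_G = 2 * (err @ E).T @ z / err.size
    E -= lr * grad_E
    G -= lr * grad_G

print(loss)  # near zero once E approximately inverts G
```

At convergence E @ G.T approximates the identity on the latent space, which is the "inverse mapping" property the abstract attributes to the encoder; in the full FAAE objective this loss is trained jointly with the adversarial data-space criterion.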


Related research

03/03/2017 - Denoising Adversarial Autoencoders
Unsupervised learning is of growing interest because it unlocks the pote...

11/18/2015 - Adversarial Autoencoders
In this paper, we propose the "adversarial autoencoder" (AAE), which is ...

04/07/2017 - It Takes (Only) Two: Adversarial Generator-Encoder Networks
We present a new autoencoder-type architecture that is trainable in an u...

06/13/2017 - Adversarially Regularized Autoencoders
While autoencoders are a key technique in representation learning for co...

02/12/2019 - Density Estimation and Incremental Learning of Latent Vector for Generative Autoencoders
In this paper, we treat the image generation task using the autoencoder,...

09/10/2019 - Learning Priors for Adversarial Autoencoders
Most deep latent factor models choose simple priors for simplicity, trac...

02/22/2018 - Sounderfeit: Cloning a Physical Model with Conditional Adversarial Autoencoders
An adversarial autoencoder conditioned on known parameters of a physical...
