MMCGAN: Generative Adversarial Network with Explicit Manifold Prior

06/18/2020
by Guanhua Zheng, et al.

Generative Adversarial Network (GAN) provides a good generative framework for producing realistic samples, but suffers from two well-recognized issues: mode collapse and unstable training. In this work, we propose to employ explicit manifold learning as a prior to alleviate mode collapse and stabilize the training of GAN. Since the basic assumption of conventional manifold learning fails in the case of sparse and uneven data distributions, we introduce a new target for manifold learning, Minimum Manifold Coding (MMC), which encourages a simple and unfolded manifold. In essence, MMC is the general case of the Shortest Hamiltonian Path problem and pursues the manifold with minimum Riemann volume. Using the standardized codes from MMC as a prior, the GAN is guaranteed to recover a simple and unfolded manifold covering all the training data. Our experiments on both toy data and real datasets show the effectiveness of MMCGAN in alleviating mode collapse, stabilizing training, and improving the quality of generated samples.
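For intuition, "minimum Riemann volume" admits a standard reading from Riemannian geometry: for a manifold parameterized by a smooth embedding, the volume is the integral of the Jacobian volume element over the code space. The sketch below uses illustrative notation (f, \mathcal{Z}, J_f) that is not taken from the paper, and the exact MMC objective and constraints may differ.

% Riemannian volume of a manifold parameterized by an embedding f : \mathcal{Z} \to \mathcal{X}
% (illustrative notation; not necessarily the paper's formulation)
\[
\operatorname{Vol}(f) \;=\; \int_{\mathcal{Z}} \sqrt{\det\!\big(J_f(z)^{\top} J_f(z)\big)}\, dz,
\qquad
f^{\ast} \;=\; \operatorname*{arg\,min}_{\,f \,:\; \{x_i\}_{i=1}^{N} \subset f(\mathcal{Z})} \operatorname{Vol}(f),
\]
% where J_f(z) is the Jacobian of f at code z and the constraint requires the learned
% manifold to pass through (cover) all N training samples x_i.

Under this reading, the Shortest Hamiltonian Path problem appears as the one-dimensional discrete special case: a curve of minimum total length that visits every training point, consistent with the abstract's claim that MMC generalizes it.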
