Smoothing the Generative Latent Space with Mixup-based Distance Learning

by Chaerin Kong, et al.

Producing diverse and realistic images with generative models such as GANs typically requires large-scale training on a vast number of images. GANs trained with extremely limited data can easily overfit to the few training samples and display undesirable properties like a "stairlike" latent space, where transitions in latent space suffer from discontinuity, occasionally yielding abrupt changes in outputs. In this work, we consider the situation where neither a large-scale dataset of interest nor a transferable source dataset is available, and seek to train existing generative models with minimal overfitting and mode collapse. We propose a latent mixup-based distance regularization on the feature spaces of both the generator and the counterpart discriminator that encourages the two players to reason not only about the scarce observed data points but also about the relative distances in the feature space they reside in. Qualitative and quantitative evaluation on diverse datasets demonstrates that our method is generally applicable to existing models to enhance both fidelity and diversity under the constraint of limited data. Code will be made public.
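As a rough illustration of the idea (not the authors' exact formulation), a latent mixup distance penalty can be sketched as follows. Two latent codes are interpolated with a coefficient, and the penalty encourages the mixed sample's relative feature-space distances to the two endpoints to match that coefficient. The feature extractor, the Euclidean distance, and the squared-error loss shape below are illustrative assumptions:

```python
import numpy as np

def mixup_distance_loss(feat_mix, feat_a, feat_b, coeff, eps=1e-8):
    """Penalize mismatch between the mixup coefficient and the
    relative feature distances: d(mix, a) / (d(mix, a) + d(mix, b))
    should be close to 1 - coeff when features interpolate smoothly."""
    d_a = np.linalg.norm(feat_mix - feat_a)
    d_b = np.linalg.norm(feat_mix - feat_b)
    ratio = d_a / (d_a + d_b + eps)
    return (ratio - (1.0 - coeff)) ** 2

rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=64), rng.normal(size=64)
c = rng.uniform()
z_mix = c * z_a + (1.0 - c) * z_b  # latent mixup

# Stand-in linear "feature extractor"; in practice this would be an
# intermediate layer of the generator or discriminator.
W = rng.normal(size=(128, 64))
loss = mixup_distance_loss(W @ z_mix, W @ z_a, W @ z_b, c)
```

For a linear map the interpolation is preserved exactly, so the loss is near zero; for a real network the penalty pushes the feature space toward smooth, distance-consistent interpolation between scarce data points.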


