Top-K Training of GANs: Improving Generators by Making Critics Less Critical

by Samarth Sinha et al.

We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm that materially improves results with no increase in computational cost: when updating the generator parameters, we simply zero out the gradient contributions from the elements of the batch that the critic scores as 'least realistic'. Through experiments on many different GAN variants, we show that this 'top-k update' procedure is a generally applicable improvement. To understand the nature of the improvement, we conduct extensive analysis on a simple mixture-of-Gaussians dataset and observe several interesting phenomena. Among these is that, when gradient updates are computed using the worst-scoring batch elements, samples can actually be pushed further away from their nearest mode.
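The core idea can be sketched in a few lines: during the generator update, rank the batch by critic score and keep only the k highest-scoring (most realistic) samples in the loss, so the rest contribute zero gradient. Below is a minimal NumPy sketch of that selection step; the function name and toy numbers are illustrative, not from the paper's code.

```python
import numpy as np

def topk_generator_loss(critic_scores, per_sample_losses, k):
    """Top-k update (sketch): average the generator loss over only the k
    batch elements the critic scores as most realistic; the remaining
    elements are dropped, i.e. their gradient contributions are zeroed."""
    top_idx = np.argsort(critic_scores)[-k:]  # indices of the k highest scores
    return per_sample_losses[top_idx].mean()

# Hypothetical batch of 8 samples: critic scores and per-sample generator losses.
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4])
losses = np.array([1.0, 5.0, 1.2, 4.0, 1.5, 3.0, 2.0, 2.5])
print(topk_generator_loss(scores, losses, k=4))  # averages losses of the 4 best-scored samples
```

In a real GAN training loop the same effect is typically obtained by applying a top-k selection to the critic's outputs on the fake batch before reducing them to the generator loss, which is why the change amounts to roughly one line of code.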


Related Papers

Multi-Generator Generative Adversarial Nets
HGAN: Hybrid Generative Adversarial Network
Prb-GAN: A Probabilistic Framework for GAN Modelling
Which Training Methods for GANs do actually Converge?
MMCGAN: Generative Adversarial Network with Explicit Manifold Prior
Sample weighting as an explanation for mode collapse in generative adversarial networks
Geometric GAN
