Teaching a GAN What Not to Learn

by Siddarth Asokan et al.

Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution. Variants such as conditional GANs and auxiliary-classifier GANs (ACGANs) extend GANs to supervised and semi-supervised learning by providing labelled data and using multi-class discriminators. In this paper, we approach the supervised GAN problem from a different perspective, one motivated by the philosophy of the famous Persian poet Rumi, who said, "The art of knowing is knowing what to ignore." In our framework, we not only provide the GAN with positive data that it must learn to model, but also present it with so-called negative samples that it must learn to avoid; we call this "The Rumi Framework." This formulation allows the discriminator to represent the underlying target distribution better by learning to penalize undesirable generated samples, and we show that this capability accelerates the learning of the generator. We reformulate the standard GAN (SGAN) and the least-squares GAN (LSGAN) within the Rumi setting. The advantage of the reformulation is demonstrated through experiments on the MNIST, Fashion MNIST, CelebA, and CIFAR-10 datasets. Finally, we apply the proposed formulation to the important problem of learning an under-represented class in an unbalanced dataset. The Rumi approach yields substantially lower FID scores than the standard GAN frameworks while possessing better generalization capability.
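To make the idea concrete, here is a minimal sketch, not the paper's exact objectives, of how a binary-cross-entropy discriminator loss could incorporate negative samples: the discriminator is rewarded for scoring desired (positive) reals high, and for scoring both undesired (negative) reals and generated samples low. The function names and the plain-Python scoring interface are illustrative assumptions.

```python
import math

def d_loss_rumi(d_pos, d_neg, d_fake):
    """Illustrative Rumi-style discriminator loss (assumed form, not the
    paper's exact objective). Inputs are lists of discriminator outputs
    in (0, 1): d_pos on desired reals, d_neg on undesired (negative)
    reals, d_fake on generated samples. The negative reals are treated
    like fakes, so D learns what the generator must avoid."""
    eps = 1e-12  # numerical guard for log
    loss_pos = -sum(math.log(p + eps) for p in d_pos) / len(d_pos)
    loss_neg = -sum(math.log(1.0 - p + eps) for p in d_neg) / len(d_neg)
    loss_fake = -sum(math.log(1.0 - p + eps) for p in d_fake) / len(d_fake)
    return loss_pos + loss_neg + loss_fake

def g_loss(d_fake):
    """Non-saturating generator loss: push D's scores on fakes toward
    the positive (desired) class."""
    eps = 1e-12
    return -sum(math.log(p + eps) for p in d_fake) / len(d_fake)
```

Under this sketch, a discriminator that confidently separates the three groups incurs a low loss, while one that scores everything near 0.5 does not; the generator, in turn, is penalized whenever its samples resemble the negative data that D has learned to reject.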



Code Repositories


Code accompanying the NeurIPS 2020 submission "Teaching a GAN What Not to Learn."

