Teaching a GAN What Not to Learn

10/29/2020
by Siddarth Asokan, et al.

Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution. Variants such as conditional GANs and auxiliary-classifier GANs (ACGANs) extend GANs to supervised and semi-supervised learning frameworks by providing labelled data and using multi-class discriminators. In this paper, we approach the supervised GAN problem from a different perspective, one that is motivated by the philosophy of the famous Persian poet Rumi, who said, "The art of knowing is knowing what to ignore." In the GAN framework, we not only provide the GAN with positive data that it must learn to model, but also present it with so-called negative samples that it must learn to avoid - we call this "The Rumi Framework." This formulation allows the discriminator to represent the underlying target distribution better by learning to penalize generated samples that are undesirable - we show that this capability accelerates the learning process of the generator. We present reformulations of the standard GAN (SGAN) and the least-squares GAN (LSGAN) within the Rumi setting. The advantage of the reformulation is demonstrated by means of experiments conducted on the MNIST, Fashion MNIST, CelebA, and CIFAR-10 datasets. Finally, we consider an application of the proposed formulation to the important problem of learning an under-represented class in an unbalanced dataset. The Rumi approach results in substantially lower FID scores than the standard GAN frameworks while possessing better generalization capability.
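To make the idea concrete, the following is a minimal sketch of one plausible instantiation of a Rumi-style SGAN objective, written with NumPy on raw discriminator outputs. It is an illustration of the abstract's description (the discriminator rewards positive samples and penalizes both negative and generated samples), not the paper's exact loss; the function names and the equal weighting of the three terms are assumptions for clarity.

```python
import numpy as np

def rumi_discriminator_loss(d_pos, d_neg, d_fake, eps=1e-8):
    """Rumi-style standard-GAN discriminator loss (illustrative sketch).

    d_pos  : discriminator outputs in (0, 1) on positive (desired) real samples
    d_neg  : outputs on negative real samples the generator must avoid
    d_fake : outputs on generated samples

    The discriminator is pushed toward 1 on positives and toward 0 on
    both negatives and fakes, so the negative samples shape the decision
    boundary that the generator is trained against.
    """
    d_pos, d_neg, d_fake = (np.clip(a, eps, 1 - eps)
                            for a in (d_pos, d_neg, d_fake))
    return -(np.mean(np.log(d_pos))
             + np.mean(np.log(1.0 - d_neg))
             + np.mean(np.log(1.0 - d_fake)))

def rumi_generator_loss(d_fake, eps=1e-8):
    """Non-saturating generator loss: push fake samples toward the
    positive class, i.e. away from both noise and negative data."""
    d_fake = np.clip(d_fake, eps, 1e0 - eps)
    return -np.mean(np.log(d_fake))
```

A discriminator that scores positives high and negatives/fakes low incurs a small loss, while one that confuses negatives with positives is penalized, which is the mechanism the abstract credits with accelerating generator learning.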

