Small-GAN: Speeding Up GAN Training Using Core-sets

by Samarth Sinha et al.

Recent work by Brock et al. (2018) suggests that Generative Adversarial Networks (GANs) benefit disproportionately from large mini-batch sizes. Unfortunately, using large batches is slow and expensive on conventional hardware. Thus, it would be nice if we could generate batches that were effectively large though actually small. In this work, we propose a method to do this, inspired by the use of core-set selection in active learning. When training a GAN, we draw a large batch of samples from the prior and then compress that batch using core-set selection. To create effectively large batches of 'real' images, we create a cached dataset of Inception activations of each training image, randomly project them down to a smaller dimension, and then use core-set selection on those projected activations at training time. We conduct experiments showing that this technique substantially reduces training time and memory usage for modern GAN variants, that it reduces the fraction of dropped modes in a synthetic dataset, and that it allows GANs to reach a new state of the art in anomaly detection.
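The compression step described above can be sketched as a greedy k-center core-set selection: oversample a batch, then repeatedly keep the point farthest from everything selected so far, so the small batch "covers" the large one. This is a minimal illustrative sketch, not the authors' code; the function name, parameters, and batch sizes below are assumptions.

```python
import numpy as np

def greedy_coreset(points, k, rng=None):
    """Greedy k-center core-set selection (illustrative sketch).

    Starts from a random point, then repeatedly adds the point whose
    distance to its nearest already-selected point is largest.
    Returns the indices of the k selected points.
    """
    rng = np.random.default_rng(rng)
    n = len(points)
    selected = [int(rng.integers(n))]
    # Distance from every point to its nearest selected point so far.
    min_dist = np.linalg.norm(points - points[selected[0]], axis=1)
    while len(selected) < k:
        idx = int(np.argmax(min_dist))          # farthest remaining point
        selected.append(idx)
        new_dist = np.linalg.norm(points - points[idx], axis=1)
        min_dist = np.minimum(min_dist, new_dist)
    return selected

# Usage: compress an oversampled batch of 512 latent draws to an
# effectively large batch of 64 (sizes are hypothetical).
latents = np.random.default_rng(0).normal(size=(512, 128))
small_batch = latents[greedy_coreset(latents, 64, rng=0)]
```

For 'real' images, the same selection would be run on the cached, randomly projected Inception activations rather than on raw pixels or latent vectors.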


