Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance

by Shiwei Liu, et al.
The University of Texas at Austin
TU Eindhoven

Generative adversarial networks (GANs) have received a surge of interest since their introduction due to the high quality of the data they generate. While achieving increasingly impressive results, the resource demands associated with large model sizes hinder the use of GANs in resource-limited scenarios. For inference, existing model compression techniques can reduce model complexity with comparable performance. However, the training efficiency of GANs has been less explored due to their fragile training process. In this paper, we, for the first time, explore the possibility of directly training sparse GANs from scratch without involving any dense or pre-training steps. Even more unconventionally, our proposed method enables directly training sparse unbalanced GANs with an extremely sparse generator from scratch. Instead of training full GANs, we start with sparse GANs and dynamically explore the parameter space spanned by the generator throughout training. Such a sparse-to-sparse training procedure progressively enhances the capacity of the highly sparse generator while sticking to a fixed small parameter budget, with appealing training and inference efficiency gains. Extensive experiments with modern GAN architectures validate the effectiveness of our method. Our sparsified GANs, trained from scratch in one single run, are able to outperform ones learned by expensive iterative pruning and re-training. Perhaps most importantly, we find that instead of inheriting parameters from expensive pre-trained GANs, directly training sparse GANs from scratch can be a much more efficient solution. For example, only training with an 80% sparse generator and a 70% sparse discriminator, our method can achieve even better performance than the dense BigGAN.
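The "dynamically explore the parameter space" step above follows the general prune-and-grow cycle used in dynamic sparse training: periodically drop the weakest active connections and regrow the same number elsewhere, so the parameter budget never changes. The sketch below is a generic magnitude-prune / random-grow mask update under that assumption, not the paper's exact schedule or growth criterion:

```python
import numpy as np

def prune_and_grow(weights, mask, drop_frac=0.3, rng=None):
    """One generic dynamic-sparse-training step: deactivate the
    smallest-magnitude active weights, then activate the same number
    of connections at random inactive positions, so the total number
    of nonzero parameters (the sparsity budget) stays fixed."""
    rng = np.random.default_rng() if rng is None else rng
    flat_mask = mask.ravel().copy()
    active = np.flatnonzero(flat_mask)
    n_drop = int(len(active) * drop_frac)
    if n_drop == 0:
        return mask
    # Prune: drop the n_drop active weights with smallest magnitude.
    mags = np.abs(weights.ravel()[active])
    drop_idx = active[np.argsort(mags)[:n_drop]]
    flat_mask[drop_idx] = 0
    # Grow: re-activate n_drop positions chosen at random from the
    # currently inactive ones (a just-dropped position may be chosen
    # again; more refined criteria, e.g. gradient-based growth, would
    # replace this random choice).
    inactive = np.flatnonzero(flat_mask == 0)
    grow_idx = rng.choice(inactive, size=n_drop, replace=False)
    flat_mask[grow_idx] = 1
    return flat_mask.reshape(mask.shape)
```

Applied to the generator's weight tensors every few hundred iterations, this keeps the generator at a fixed sparsity (e.g. 80%) throughout training while letting its connectivity pattern evolve.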

