Is Generator Conditioning Causally Related to GAN Performance?

02/23/2018
by Augustus Odena, et al.

Recent work (Pennington et al., 2017) suggests that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning. Motivated by this, we study the distribution of singular values of the Jacobian of the generator in Generative Adversarial Networks (GANs). We find that this Jacobian generally becomes ill-conditioned at the beginning of training. Moreover, we find that the average (with z from p(z)) conditioning of the generator is highly predictive of two other ad-hoc metrics for measuring the 'quality' of trained GANs: the Inception Score and the Fréchet Inception Distance (FID). We test the hypothesis that this relationship is causal by proposing a 'regularization' technique (called Jacobian Clamping) that softly penalizes the condition number of the generator Jacobian. Jacobian Clamping improves the mean Inception Score and the mean FID for GANs trained on several datasets. It also greatly reduces inter-run variance of the aforementioned scores, addressing (at least partially) one of the main criticisms of GANs.
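To make the abstract's two quantities concrete, here is a minimal NumPy sketch: it estimates the condition number of a generator's Jacobian (ratio of largest to smallest singular value) via finite differences, and computes a soft penalty on a finite-difference proxy for that conditioning, in the spirit of Jacobian Clamping. The toy `generator`, the clamp thresholds `lam_min`/`lam_max`, and the single-perturbation proxy are illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def generator(z, W):
    # Toy stand-in for a GAN generator: a fixed linear map plus tanh.
    # (Illustrative only; real generators are deep networks.)
    return np.tanh(W @ z)

def numerical_jacobian(f, z, eps=1e-6):
    """Finite-difference Jacobian of f at z, one column per input dim."""
    z = z.astype(float)
    out0 = f(z)
    jac = np.zeros((out0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        jac[:, i] = (f(z + dz) - out0) / eps
    return jac

def condition_number(jac):
    """Ratio of largest to smallest singular value (>= 1)."""
    s = np.linalg.svd(jac, compute_uv=False)
    return s[0] / s[-1]

def jacobian_clamping_penalty(f, z, lam_min=1.0, lam_max=20.0,
                              eps=1e-2, rng=None):
    """Soft penalty when a stochastic estimate of the generator's local
    sensitivity Q = ||f(z') - f(z)|| / ||z' - z|| leaves [lam_min, lam_max].
    (Assumed thresholds; the paper's hyperparameters may differ.)"""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.standard_normal(z.shape)
    delta = eps * delta / np.linalg.norm(delta)
    q = np.linalg.norm(f(z + delta) - f(z)) / np.linalg.norm(delta)
    over = max(q - lam_max, 0.0)   # penalize Q above lam_max
    under = max(lam_min - q, 0.0)  # penalize Q below lam_min
    return over ** 2 + under ** 2
```

In training, a penalty like this would be averaged over the batch of latent samples z ~ p(z) and added to the generator loss; here it is shown as a standalone function for clarity.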


