GANs Settle Scores!

by Siddarth Asokan, et al.

Generative adversarial networks (GANs) comprise a generator, trained to learn the underlying distribution of the desired data, and a discriminator, trained to distinguish real samples from those output by the generator. The majority of the GAN literature focuses on understanding the optimality of the discriminator through integral probability metric (IPM) or divergence-based analysis. In this paper, we propose a unified variational approach to analyzing the generator optimization. In f-divergence-minimizing GANs, we show that the optimal generator is the one that matches the score of its output distribution with that of the data distribution, while in IPM GANs, the optimal generator matches score-like functions involving the flow field of the kernel associated with the chosen IPM constraint space. Further, the IPM-GAN optimization can be viewed as one of smoothed score matching, in which the scores of the data and generator distributions are convolved with the kernel associated with the constraint. The proposed approach unifies score-based training and existing GAN flavors, leveraging results from normalizing flows, while also explaining empirical phenomena such as the stability of non-saturating GAN losses. Based on these results, we propose novel alternatives to f-GAN and IPM-GAN training based on score and flow matching, and discriminator-guided Langevin sampling.
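The discriminator-guided Langevin sampling mentioned above builds on standard (unadjusted) Langevin dynamics, which draws samples from a distribution using only its score, i.e., the gradient of the log-density. As a minimal sketch of that base mechanism (not the paper's implementation; the toy Gaussian target, step size, and iteration count are illustrative assumptions), the update is x ← x + (ε/2)·score(x) + √ε·noise:

```python
import numpy as np

def langevin_sample(score_fn, x0, step=1e-2, n_steps=500, rng=None):
    """Unadjusted Langevin dynamics.

    Iterates x <- x + (step/2) * score_fn(x) + sqrt(step) * noise,
    whose stationary distribution approximates the target whose
    score (gradient of the log-density) is score_fn.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + 0.5 * step * score_fn(x) + np.sqrt(step) * noise
    return x

# Toy target: standard normal N(0, 1), whose score is -x.
score = lambda x: -x
rng = np.random.default_rng(0)
samples = langevin_sample(score, rng.standard_normal((5000, 1)),
                          step=0.1, n_steps=200, rng=rng)
```

In the discriminator-guided setting, the score of the target would instead be estimated from a trained discriminator rather than given in closed form as here.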


