Conjugate Gradient Method for Generative Adversarial Networks

by Hiroki Naganuma, et al.

While generative models have many advantages, it is not feasible to compute the Jensen-Shannon divergence between the density function of the data and the density function of the model when the model is a deep neural network; for this reason, various alternative approaches have been developed. Generative adversarial networks (GANs) reformulate the problem as a discriminative one between two models, a generator and a discriminator, whose training can be cast in the language of game theory as the search for a local Nash equilibrium. Since this optimization is more difficult than minimizing a single objective function, we propose applying the conjugate gradient method to solve the local Nash equilibrium problem in GANs. We give a convergence analysis showing, under mild assumptions, that the proposed method converges to a local Nash equilibrium with three different learning-rate schedules, including a constant learning rate. Furthermore, we demonstrate convergence to a local Nash equilibrium on a simple toy problem, and we compare the proposed method with other optimization methods in experiments on real-world data, finding that it outperforms stochastic gradient descent (SGD) and momentum SGD.




