Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities

by Chaobing Song, et al.

The optimization problems associated with training generative adversarial neural networks can largely be reduced to certain non-monotone variational inequality problems (VIPs), whereas existing convergence results mostly rely on monotone or strongly monotone assumptions. In this paper, we propose optimistic dual extrapolation (OptDE), a method that performs only one gradient evaluation per iteration. We show that OptDE is provably convergent to a strong solution under different coherent non-monotone assumptions. In particular, when a weak solution exists, the convergence rate of our method is O(1/ϵ^2), which matches the best existing result for methods with two gradient evaluations. Further, when a σ-weak solution exists, the convergence guarantee improves to the linear rate O(log(1/ϵ)). Along the way, as a byproduct of our inquiries into non-monotone variational inequalities, we provide the near-optimal O((1/ϵ) log(1/ϵ)) convergence guarantee in terms of a restricted strong merit function for monotone variational inequalities. We also show how our results generalize naturally to the stochastic setting and obtain corresponding new convergence results. Taken together, our results contribute to the broad landscape of variational inequalities, both non-monotone and monotone alike, by providing a novel and more practical algorithm with state-of-the-art convergence guarantees.
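The abstract's key practical point is the single gradient evaluation per iteration: classical extragradient methods call the operator twice per step, while optimistic/single-call schemes reuse the previous gradient. The sketch below illustrates that idea on a toy bilinear saddle-point problem; it is a minimal single-call optimistic update in the spirit described, not the authors' exact OptDE iteration (which works with dual extrapolation and the assumptions stated in the paper). The problem choice and step size here are illustrative assumptions.

```python
import math

# Toy bilinear saddle-point problem: min_x max_y f(x, y) = x * y.
# Its VI operator is F(x, y) = (df/dx, -df/dy) = (y, -x), which is
# monotone with a unique strong solution at the origin (0, 0).
def F(z):
    x, y = z
    return (y, -x)

def optimistic_step(z, f_curr, f_prev, eta):
    """Single-call optimistic update z_{k+1} = z_k - eta*(2 F(z_k) - F(z_{k-1})).

    The previous operator value f_prev is reused, so each iteration needs
    only one fresh evaluation of F (extragradient would need two)."""
    return tuple(zi - eta * (2.0 * fc - fp)
                 for zi, fc, fp in zip(z, f_curr, f_prev))

def solve(z0, eta=0.1, iters=5000):
    z = z0
    f_prev = F(z)              # bootstrap the "past" operator value
    for _ in range(iters):
        f_curr = F(z)          # the ONLY operator evaluation this iteration
        z = optimistic_step(z, f_curr, f_prev, eta)
        f_prev = f_curr
    return z

z_star = solve((1.0, 1.0))
print(math.hypot(*z_star))     # distance to the strong solution (0, 0)
```

On this bilinear example, plain simultaneous gradient descent-ascent cycles or diverges, whereas the optimistic correction term `2 F(z_k) - F(z_{k-1})` damps the rotation and the iterates contract linearly to the origin, consistent with the linear rates discussed for the well-behaved regimes in the abstract.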
