Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions

by Sayantan Choudhury, et al.

Single-call stochastic extragradient methods, like stochastic past extragradient (SPEG) and stochastic optimistic gradient (SOG), have gained a lot of interest in recent years and are among the most efficient algorithms for solving large-scale min-max optimization and variational inequality problems (VIPs) appearing in various machine learning tasks. However, despite their undoubted popularity, current convergence analyses of SPEG and SOG require a bounded-variance assumption. In addition, several important questions regarding the convergence properties of these methods are still open, including mini-batching, efficient step-size selection, and convergence guarantees under different sampling strategies. In this work, we address these questions and provide convergence guarantees for two large classes of structured non-monotone VIPs: (i) quasi-strongly monotone problems (a generalization of strongly monotone problems) and (ii) weak Minty variational inequalities (a generalization of monotone and Minty VIPs). We introduce the expected residual condition, explain its benefits, and show that it is strictly weaker than the previously used growth conditions, expected co-coercivity, or bounded-variance assumptions. Equipped with this condition, we provide theoretical guarantees for the convergence of single-call extragradient methods for different step-size selections, including constant, decreasing, and step-size-switching rules. Furthermore, our convergence analysis holds under the arbitrary sampling paradigm, which includes importance sampling and various mini-batching strategies as special cases.
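To make the "single-call" structure concrete, here is a minimal NumPy sketch of the SPEG update (Popov's past extragradient with stochastic operator evaluations) on an illustrative strongly monotone min-max game. The problem instance, noise model, and step size below are assumptions chosen for illustration, not the paper's experimental setup; the key point is that each iteration makes only one fresh operator call and reuses the previous one for the extrapolation step.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
mu = 1.0  # strong-monotonicity modulus (illustrative choice)


def operator(z, noise=0.1):
    # Operator F(z) = (grad_x f, -grad_y f) for the illustrative game
    #   min_x max_y  (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2,
    # with additive Gaussian noise standing in for stochastic sampling.
    x, y = z[:d], z[d:]
    Fx = mu * x + A @ y
    Fy = mu * y - A.T @ x
    return np.concatenate([Fx, Fy]) + noise * rng.standard_normal(2 * d)


def speg(z0, gamma=0.05, iters=2000):
    """Stochastic past extragradient: one fresh operator call per
    iteration; the extrapolation reuses the call from the previous one."""
    z = z0.copy()
    g_prev = operator(z)  # single warm-up call before the loop
    for _ in range(iters):
        z_half = z - gamma * g_prev  # extrapolate with the *past* gradient
        g_prev = operator(z_half)    # the only new operator call this iteration
        z = z - gamma * g_prev       # update step
    return z


z_final = speg(rng.standard_normal(2 * d))
# The unique solution of this game is z* = 0; with a constant step size the
# iterates settle in a noise-dominated neighborhood of it.
print(np.linalg.norm(z_final))
```

By contrast, the classical stochastic extragradient (SEG) method would evaluate the operator twice per iteration (once at `z` for the extrapolation and once at `z_half` for the update), which is exactly the per-iteration cost that single-call methods halve.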



