Sharp Analysis of Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization

by Yan Yan, et al.

The epoch gradient descent method (a.k.a. Epoch-GD) proposed by Hazan and Kale (2011) was deemed a breakthrough for stochastic strongly convex minimization, as it achieves the optimal convergence rate of O(1/T) with T iterative updates for the objective gap. However, its extension to stochastic min-max problems with strong convexity and strong concavity remains open: it is still unclear whether a fast rate of O(1/T) for the duality gap is achievable for stochastic min-max optimization under strong convexity and strong concavity. Although some recent studies have proposed stochastic algorithms with fast convergence rates for min-max problems, they require additional assumptions about the problem, e.g., smoothness or bilinear structure. In this paper, we bridge this gap by providing a sharp analysis of an epoch-wise stochastic gradient descent ascent method (referred to as Epoch-GDA) for solving strongly convex strongly concave (SCSC) min-max problems, without imposing any additional assumptions about smoothness or structure. To the best of our knowledge, our result is the first to show that Epoch-GDA can achieve the fast rate of O(1/T) for the duality gap of general SCSC min-max problems. We emphasize that this generalization of Epoch-GD for strongly convex minimization problems to Epoch-GDA for SCSC min-max problems is non-trivial and requires novel technical analysis. Moreover, we show that the key lemma can also be used to prove the convergence of Epoch-GDA for weakly-convex strongly-concave min-max problems, yielding the best-known complexity in this setting as well, without smoothness or other structural conditions.
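The epoch-wise scheme described above can be illustrated with a small sketch. This is not the paper's exact algorithm or parameters: the toy objective, noise level, initial step size, and epoch schedule below are all illustrative assumptions. The idea is to run plain stochastic gradient descent ascent within each epoch, restart the next epoch from the averaged iterates, and simultaneously halve the step size and double the epoch length.

```python
import random

def epoch_gda(grad_x, grad_y, x0, y0, eta0=0.5, T0=100, num_epochs=8, seed=0):
    """Illustrative epoch-wise stochastic gradient descent ascent (Epoch-GDA sketch).

    Each epoch runs SGDA with a fixed step size, then restarts from the
    within-epoch averaged iterates, halving the step size and doubling
    the epoch length for the next epoch.
    """
    rng = random.Random(seed)
    x, y = x0, y0
    eta, T = eta0, T0
    for _ in range(num_epochs):
        sum_x = sum_y = 0.0
        for _ in range(T):
            # noisy (stochastic) gradient oracle: true gradient + Gaussian noise
            gx = grad_x(x, y) + 0.1 * rng.gauss(0.0, 1.0)
            gy = grad_y(x, y) + 0.1 * rng.gauss(0.0, 1.0)
            x -= eta * gx   # descent step on the min variable
            y += eta * gy   # ascent step on the max variable
            sum_x += x
            sum_y += y
        # restart the next epoch from the averaged iterates
        x, y = sum_x / T, sum_y / T
        eta, T = eta / 2.0, T * 2
    return x, y

# Toy SCSC problem: f(x, y) = x^2 + x*y - y^2, whose saddle point is (0, 0).
gx = lambda x, y: 2.0 * x + y   # partial derivative in x
gy = lambda x, y: x - 2.0 * y   # partial derivative in y
x_star, y_star = epoch_gda(gx, gy, x0=3.0, y0=-2.0)
```

On this toy problem the averaged iterates contract toward the saddle point (0, 0); the halving/doubling schedule is what drives the O(1/T) behavior for the duality gap in the strongly convex strongly concave analysis.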


