Enhanced First and Zeroth Order Variance Reduced Algorithms for Min-Max Optimization

06/16/2020
by Tengyu Xu, et al.

Min-max optimization captures many important machine learning problems, such as robust adversarial learning and inverse reinforcement learning, and nonconvex-strongly-concave min-max optimization has been an active line of research. In particular, a novel variance-reduction algorithm, SREDA, was proposed recently by Luo et al. (2020) to solve such problems, and was shown to achieve the optimal complexity dependence on the required accuracy level ϵ. Despite this superior theoretical performance, the convergence guarantee of SREDA requires stringent initialization accuracy and an ϵ-dependent stepsize to control the per-iteration progress, so that SREDA can run very slowly in practice. This paper develops a novel analytical framework that guarantees SREDA's optimal complexity for a much enhanced algorithm, SREDA-Boost, which has a less restrictive initialization requirement and an accuracy-independent (and much larger) stepsize. Hence, SREDA-Boost runs substantially faster in experiments than SREDA. We further apply SREDA-Boost to develop a zeroth-order variance-reduction algorithm, named ZO-SREDA-Boost, for the scenario in which only function values, not gradients, are accessible, and show that ZO-SREDA-Boost improves upon the best known complexity dependence on ϵ. This is the first study to apply the variance-reduction technique to zeroth-order algorithms for min-max optimization problems.
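To make the problem setting concrete, below is a minimal, hypothetical sketch of stochastic gradient descent-ascent on a toy nonconvex-strongly-concave objective, using a two-point zeroth-order gradient estimator built only from function values. This is not the paper's SREDA or ZO-SREDA-Boost algorithm; the objective, stepsizes, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, mu = 5, 1.0
A = rng.standard_normal((d, d))

def f(x, y):
    # sum(cos(x)) makes f nonconvex in x; the -(mu/2)*||y||^2 term makes f
    # strongly concave in y, matching the nonconvex-strongly-concave class.
    return np.sum(np.cos(x)) + x @ A @ y - 0.5 * mu * y @ y

def zo_grad(func, z, delta=1e-4):
    """Two-point coordinate-wise zeroth-order gradient estimate
    (uses function values only, no gradient access)."""
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = delta
        g[i] = (func(z + e) - func(z - e)) / (2.0 * delta)
    return g

x = rng.standard_normal(d)
y = rng.standard_normal(d)
eta_x, eta_y = 0.01, 0.1  # fixed, accuracy-independent stepsizes (illustrative values)

for t in range(2000):
    gx = zo_grad(lambda u: f(u, y), x)  # estimate grad_x f from function values
    gy = zo_grad(lambda v: f(x, v), y)  # estimate grad_y f from function values
    x = x - eta_x * gx  # descent step on the minimization variable
    y = y + eta_y * gy  # ascent step on the (strongly concave) maximization variable

print("final ||grad_y f||:", np.linalg.norm(zo_grad(lambda v: f(x, v), y)))
```

The two-timescale stepsizes (larger for the strongly concave y-update) let the inner maximization track its optimum as x moves; the variance-reduction and recursive gradient estimation that distinguish SREDA-Boost and ZO-SREDA-Boost are omitted here for brevity.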
