Zeroth-Order Algorithms for Smooth Saddle-Point Problems

09/21/2020
by   Abdurakhmon Sadiev, et al.

In recent years, saddle-point problems have become increasingly important in machine learning, largely due to the popularity of GANs. In this paper, we solve stochastic smooth (strongly) convex-concave saddle-point problems using zeroth-order oracles. Theoretical analysis shows that when the optimization set is a simplex, we lose only a log n factor in the stochastic convergence term. The paper also provides an approach to solving saddle-point problems when the oracle for one of the variables is of zeroth order and the oracle for the other is of first order. Finally, we implement zeroth-order and 1/2th-order methods to solve practical problems.
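To illustrate the general idea of a zeroth-order (gradient-free) approach to a saddle-point problem, here is a minimal sketch in Python. It is not the paper's algorithm (which works on the simplex with a mirror-descent-type scheme); it only shows the two ingredients the abstract refers to: a gradient estimate built from function values alone, and a descent-ascent loop that uses it on a toy convex-concave objective. All names, step sizes, and the test function are illustrative assumptions.

```python
import numpy as np

def zo_grad(f, z, tau=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate of f at z.

    Samples a random direction e on the unit sphere and returns
    d * (f(z + tau*e) - f(z - tau*e)) / (2*tau) * e, a standard
    (slightly biased) gradient estimate; constants here are
    illustrative, not tuned as in the paper."""
    rng = rng or np.random.default_rng()
    d = z.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)
    return d * (f(z + tau * e) - f(z - tau * e)) / (2.0 * tau) * e

def zo_gda(phi, x0, y0, steps=5000, lr=1e-2, seed=1):
    """Plain gradient descent-ascent on phi(x, y) driven only by
    function evaluations: descend in x, ascend in y."""
    x, y = x0.copy(), y0.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        gx = zo_grad(lambda u: phi(u, y), x, rng=rng)
        gy = zo_grad(lambda v: phi(x, v), y, rng=rng)
        x -= lr * gx   # minimize over x
        y += lr * gy   # maximize over y
    return x, y

# Toy strongly-convex-strongly-concave problem:
#   phi(x, y) = 0.5*||x||^2 + x^T A y - 0.5*||y||^2,  saddle point at (0, 0)
A = np.array([[1.0, 0.5], [0.0, 1.0]])
phi = lambda x, y: 0.5 * x @ x + x @ A @ y - 0.5 * y @ y
x_hat, y_hat = zo_gda(phi, np.ones(2), np.ones(2))
print(x_hat, y_hat)  # both iterates should approach the saddle point (0, 0)
```

A mixed "1/2th-order" variant, in this sketch's terms, would simply replace one of the two zo_grad calls with the true partial gradient for the variable whose first-order oracle is available.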

