A Central Limit Theorem for Stochastic Saddle Point Optimization

06/09/2023
by Abhishek Roy, et al.

In this work, we study the Uncertainty Quantification (UQ) of an algorithmic estimator of the saddle point of a strongly-convex strongly-concave objective. Specifically, we show that the averaged iterates of a Stochastic Extra-Gradient (SEG) method for a Saddle Point Problem (SPP) converge almost surely to the saddle point and satisfy a Central Limit Theorem (CLT) with optimal covariance under two different noise settings, namely martingale-difference noise and state-dependent Markov noise. To ensure the stability of the algorithm dynamics under state-dependent Markov noise, we propose a variant of SEG in which the iterates are truncated onto varying sets. Our work opens the door to online inference for SPPs, with numerous potential applications in GANs, robust optimization, and reinforcement learning, to name a few. We illustrate our results through numerical experiments.
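
The abstract names the method but not its form. For intuition, below is a minimal, hypothetical Python sketch of SEG with Polyak-Ruppert iterate averaging on a toy strongly-convex strongly-concave quadratic, under the simpler martingale-difference (i.i.d.) noise model. The problem instance, step-size schedule, and noise level are illustrative assumptions, not the paper's setup; the paper's CLT concerns the scaled error of exactly this kind of averaged iterate, i.e. sqrt(T) * (z_bar_T - z_star) converging in distribution to a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # toy dimensions for x and y (illustrative assumption)

# Toy objective: f(x, y) = 0.5 x'Px + x'Ay - 0.5 y'Qy + b'x + c'y,
# strongly convex in x (P > 0) and strongly concave in y (Q > 0).
P = 2.0 * np.eye(d)
Q = 2.0 * np.eye(d)
A = rng.standard_normal((d, d))
b = rng.standard_normal(d)
c = rng.standard_normal(d)

# Saddle-point operator F(z) = (grad_x f, -grad_y f); linear here: F(z) = M z + g.
M = np.block([[P, A], [-A.T, Q]])
g = np.concatenate([b, -c])
z_star = np.linalg.solve(M, -g)  # exact saddle point, for reference only

def F_noisy(z, sigma=0.5):
    """Stochastic oracle: true operator plus i.i.d. (martingale-difference) noise."""
    return M @ z + g + sigma * rng.standard_normal(2 * d)

def seg_averaged(T=100_000, eta=0.05):
    """Stochastic Extra-Gradient with Polyak-Ruppert averaging of the iterates."""
    z = np.zeros(2 * d)
    z_bar = np.zeros(2 * d)
    for t in range(1, T + 1):
        step = eta / np.sqrt(t)          # diminishing step size (one common choice)
        z_half = z - step * F_noisy(z)   # extrapolation (leading) step
        z = z - step * F_noisy(z_half)   # update evaluated at the extrapolated point
        z_bar += (z - z_bar) / t         # running average of the iterates
    return z_bar

z_bar = seg_averaged()
print("averaged-iterate error:", np.linalg.norm(z_bar - z_star))
```

The extrapolation step queries the stochastic oracle twice per iteration; this double evaluation is what distinguishes extra-gradient from plain stochastic gradient descent-ascent and is what makes the averaged iterate well-behaved on saddle-point problems.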
