Variance-Reduced Decentralized Stochastic Optimization with Gradient Tracking – Part II: GT-SVRG

10/08/2019
by Ran Xin, et al.

Decentralized stochastic optimization has recently benefited from gradient tracking methods <cit.>, which provide efficient solutions for large-scale empirical risk minimization problems. In Part I <cit.> of this work, we developed GT-SAGA, a decentralized implementation of SAGA <cit.> based on gradient tracking, and discussed regimes of practical interest where GT-SAGA outperforms existing decentralized approaches in terms of the total number of local gradient computations. In this paper, we describe GT-SVRG, a decentralized, gradient-tracking-based implementation of SVRG <cit.>, another well-known variance-reduction technique. We show that the convergence rate of GT-SVRG matches that of GT-SAGA for smooth and strongly convex functions and highlight different trade-offs between the two algorithms in various settings.
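Since the abstract only names the algorithm, the following is a minimal NumPy sketch of what a GT-SVRG-style iteration typically looks like: each node runs a gradient-tracking update driven by a local SVRG gradient estimator. The least-squares objective, complete-graph mixing matrix `W`, step size `alpha`, and snapshot interval `T` below are illustrative assumptions for the sketch, not the paper's choices.

```python
# A minimal sketch of a GT-SVRG-style update: gradient tracking driven by
# local SVRG estimators. The problem, mixing matrix, and parameters are
# illustrative assumptions, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 20, 5                        # nodes, local samples per node, dimension
A = rng.normal(size=(n, m, p))            # local data matrices
b = rng.normal(size=(n, m))               # local targets
W = np.full((n, n), 1.0 / n)              # doubly stochastic mixing (complete graph)
alpha, T, iters = 0.02, 2 * m, 500        # step size, snapshot interval, iterations

def grad(i, s, x):
    """Gradient of the s-th least-squares component at node i."""
    return A[i, s] * (A[i, s] @ x - b[i, s])

def full_grad(i, x):
    """Full (batch) local gradient at node i."""
    return A[i].T @ (A[i] @ x - b[i]) / m

x = np.zeros((n, p))                      # local iterates
tau = x.copy()                            # local snapshot points
mu = np.array([full_grad(i, tau[i]) for i in range(n)])  # snapshot gradients
g = mu.copy()                             # current SVRG gradient estimates
y = g.copy()                              # gradient trackers, initialized to g

for k in range(iters):
    if k > 0 and k % T == 0:              # refresh local snapshots every T steps
        tau = x.copy()
        mu = np.array([full_grad(i, tau[i]) for i in range(n)])
    x_new = W @ x - alpha * y             # consensus step + tracked-gradient descent
    g_new = np.empty_like(g)
    for i in range(n):                    # SVRG estimator at each node
        s = rng.integers(m)
        g_new[i] = grad(i, s, x_new[i]) - grad(i, s, tau[i]) + mu[i]
    y = W @ y + g_new - g                 # gradient tracking update
    x, g = x_new, g_new

x_bar = x.mean(axis=0)
grad_norm = np.linalg.norm(np.mean([full_grad(i, x_bar) for i in range(n)], axis=0))
print(f"consensus error: {np.linalg.norm(x - x_bar):.2e}, grad norm: {grad_norm:.2e}")
```

The only difference from plain gradient tracking is the estimator `g_new[i]`: the stochastic gradient is corrected by the snapshot pair, so its variance vanishes as the iterates approach the snapshot point, which is what restores fast linear convergence on smooth, strongly convex problems.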
