Projection-free Distributed Online Learning with Strongly Convex Losses

03/20/2021
by Yuanyu Wan, et al.

To efficiently solve distributed online learning problems with complicated constraints, previous studies have proposed several distributed projection-free algorithms. The state-of-the-art one achieves an O(T^3/4) regret bound with O(√T) communication complexity. In this paper, we further exploit the strong convexity of the loss functions to improve both the regret bound and the communication complexity. Specifically, we first propose a distributed projection-free algorithm for strongly convex loss functions, which enjoys a better regret bound of O(T^2/3 log T) with a smaller communication complexity of O(T^1/3). Furthermore, we demonstrate that the regret of distributed online algorithms with C communication rounds has a lower bound of Ω(T/C), even when the loss functions are strongly convex. This lower bound implies that the O(T^1/3) communication complexity of our algorithm is nearly optimal, up to polylogarithmic factors, for obtaining the O(T^2/3 log T) regret bound. Finally, we extend our algorithm to the bandit setting and obtain similar theoretical guarantees.
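The abstract does not spell out the algorithm itself, but the defining idea of "projection-free" methods is to replace the costly projection step with a cheap linear minimization oracle (LMO), as in Frank-Wolfe. Below is a minimal single-machine sketch of one such update over an ℓ1 ball; the function names, the ℓ1-ball constraint, the quadratic loss in the usage example, and the 2/(t+2) step size are illustrative assumptions, not the distributed algorithm proposed in the paper.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||v||_1 <= radius} <grad, v>.
    The minimizer is a vertex: -radius * sign(g_i) at the coordinate
    with the largest |g_i|, so no projection is ever computed."""
    i = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[i] = -radius * np.sign(grad[i])
    return v

def frank_wolfe_step(x, grad, step):
    """One projection-free update: move x toward the LMO vertex.
    Illustrative sketch only, not the paper's distributed algorithm."""
    v = lmo_l1_ball(grad)
    return (1 - step) * x + step * v

# Usage: online rounds with a strongly convex loss f_t(x) = 0.5*||x - y_t||^2
rng = np.random.default_rng(0)
x = np.zeros(5)
for t in range(1, 101):
    y_t = rng.normal(size=5)
    grad = x - y_t                       # gradient of f_t at the current x
    x = frank_wolfe_step(x, grad, step=2.0 / (t + 2))
```

Since the iterate is always a convex combination of points in the ℓ1 ball, it stays feasible by construction, which is exactly what lets such methods avoid projections onto complicated constraint sets.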
