Improved Dynamic Regret for Online Frank-Wolfe

by Yuanyu Wan et al.

To deal with non-stationary online problems over complex constraint sets, we investigate the dynamic regret of online Frank-Wolfe (OFW), an efficient projection-free algorithm for online convex optimization. It is well known that, in offline optimization, the smoothness of functions, as well as strong convexity combined with specific properties of the constraint set, can be exploited to obtain fast convergence rates for the Frank-Wolfe (FW) algorithm. For OFW, however, previous studies only establish a dynamic regret bound of O(√(T)(1+V_T+√(D_T))) by exploiting the convexity of the problem, where T is the number of rounds, V_T is the function variation, and D_T is the gradient variation. In this paper, we derive improved dynamic regret bounds for OFW by extending the fast convergence rates of FW from offline to online optimization. The key technique for this extension is to set the step size of OFW with a line search rule. In this way, we first show that the dynamic regret bound of OFW improves to O(√(T(1+V_T))) for smooth functions. Second, we achieve a better dynamic regret bound of O((1+V_T)^{2/3}T^{1/3}) when the functions are smooth and strongly convex and the constraint set is strongly convex. Finally, for smooth and strongly convex functions whose minimizers lie in the interior of the constraint set, we demonstrate that the dynamic regret of OFW reduces to O(1+V_T), and can be further strengthened to O(min{P_T^∗, S_T^∗, V_T}+1) by performing a constant number of FW iterations per round, where P_T^∗ and S_T^∗ denote the path length and squared path length of the minimizers, respectively.
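The key technique highlighted above is to replace OFW's usual fixed step size with a line search along the Frank-Wolfe direction. A minimal sketch of this idea is given below, assuming an ℓ2-ball constraint set (for which the linear minimization step has a closed form), simple quadratic per-round losses, and a grid-based scalar search standing in for the paper's exact line search rule; all function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def linear_minimizer_l2_ball(grad, radius=1.0):
    """Linear minimization oracle for the l2 ball:
    argmin_{||v|| <= radius} <grad, v> = -radius * grad / ||grad||."""
    norm = np.linalg.norm(grad)
    return np.zeros_like(grad) if norm == 0 else -radius * grad / norm

def line_search(f, x, v, num_points=50):
    """Approximate line search for sigma in [0, 1] along the FW segment
    x + sigma * (v - x), using a uniform grid (a stand-in for an exact rule)."""
    sigmas = np.linspace(0.0, 1.0, num_points)
    values = [f(x + s * (v - x)) for s in sigmas]
    return sigmas[int(np.argmin(values))]

def ofw_line_search(losses, x0, radius=1.0):
    """Online Frank-Wolfe with line-search step sizes.

    losses: sequence of (f_t, grad_f_t) pairs revealed one per round.
    Returns the list of points played in each round."""
    x = x0.copy()
    played = []
    for f_t, grad_f_t in losses:
        played.append(x.copy())              # play x_t, then observe f_t
        v = linear_minimizer_l2_ball(grad_f_t(x), radius)
        sigma = line_search(f_t, x, v)       # step size via line search
        x = x + sigma * (v - x)              # FW update toward the vertex
    return played
```

For instance, with stationary losses f_t(x) = ||x - θ||² and θ inside the unit ball, the iterates quickly approach θ, reflecting the fast rates available when the minimizer lies in the interior of the constraint set.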
