Lazy Lagrangians with Predictions for Online Learning

01/08/2022
by Daron Anderson, et al.

We consider the general problem of online convex optimization with time-varying additive constraints in the presence of predictions for the next cost and constraint functions. A novel primal-dual algorithm is designed by combining a Follow-The-Regularized-Leader iteration with prediction-adaptive dynamic steps. The algorithm achieves 𝒪(T^{(3-β)/4}) regret and 𝒪(T^{(1+β)/2}) constraint violation bounds, tunable via the parameter β∈[1/2,1), with constant factors that shrink with the prediction quality, eventually achieving 𝒪(1) regret for perfect predictions. Our work extends the FTRL framework to this constrained OCO setting and outperforms the respective state-of-the-art greedy-based solutions, without imposing conditions on the quality of predictions, the cost functions, or the geometry of constraints, beyond convexity.
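To give a flavor of the approach, the following is a minimal sketch of a single optimistic (prediction-aware) FTRL step over an ℓ2 ball. It is an illustrative simplification, not the paper's algorithm: the function name, the quadratic regularizer, and the feasible set are assumptions, and the paper's full primal-dual method additionally maintains Lagrange multipliers for the time-varying constraints.

```python
import numpy as np

def optimistic_ftrl_step(grads, prediction, eta, radius=1.0):
    """One optimistic FTRL step on an l2 ball of the given radius.

    Minimizes <sum(grads) + prediction, x> + ||x||^2 / (2 * eta):
    linearized FTRL where the predicted next gradient is added to
    the accumulated gradients before the regularized minimization.
    (Sketch only; the paper's primal-dual algorithm also handles
    time-varying constraints via a dual variable.)
    """
    # Unconstrained minimizer of the regularized linear objective.
    z = -eta * (np.sum(grads, axis=0) + prediction)
    # Euclidean projection onto the ball {x : ||x|| <= radius}.
    norm = np.linalg.norm(z)
    if norm > radius:
        z *= radius / norm
    return z

# Toy usage: one observed gradient plus a prediction of the next one.
grads = [np.array([1.0, 0.0])]
prediction = np.array([1.0, 0.0])
x = optimistic_ftrl_step(grads, prediction, eta=0.1)
```

When the prediction matches the true next gradient, the extra linear term anticipates the upcoming loss, which is the mechanism behind the regret constants shrinking with prediction quality.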
