Non-stationary Online Convex Optimization with Arbitrary Delays

by Yuanyu Wan, et al.

Online convex optimization (OCO) with arbitrary delays, in which gradients or other information about the functions can be arbitrarily delayed, has received increasing attention recently. In contrast to previous studies that focus on stationary environments, this paper investigates delayed OCO in non-stationary environments, and aims to minimize the dynamic regret with respect to any sequence of comparators. To this end, we first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each delayed gradient according to its arrival order. Despite its simplicity, our novel analysis shows that DOGD can attain an O(√(dT)(P_T+1)) dynamic regret bound in the worst case, where d is the maximum delay, T is the time horizon, and P_T is the path length of the comparators. More importantly, when delays do not change the arrival order of the gradients, it can automatically reduce the dynamic regret to O(√(S)(1+P_T)), where S is the sum of delays. Furthermore, we develop an improved algorithm, which reduces the dynamic regret bounds achieved by DOGD to O(√(dT(P_T+1))) and O(√(S(1+P_T))), respectively. The essential idea is to run multiple instances of DOGD with different learning rates, and to use a meta-algorithm to track the best one based on their delayed performance. Finally, we demonstrate that our improved algorithm is optimal in both cases by deriving a matching lower bound.
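The core update of DOGD is simple: at each round the learner plays its current iterate, then performs one projected gradient descent step per delayed gradient that arrives, in arrival order. Below is a minimal sketch of that loop; the function name, the list-of-arrivals input format, and the projection onto an L2 ball are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dogd(grad_arrivals, dim, eta, radius=1.0):
    """Sketch of Delayed Online Gradient Descent (DOGD).

    grad_arrivals: list over rounds t; each entry is the list of
    (possibly delayed) gradients arriving at round t, in arrival order.
    The projection onto an L2 ball of the given radius is an assumed
    feasible set, used here only to keep the sketch self-contained.
    """
    x = np.zeros(dim)
    played = []
    for arrivals in grad_arrivals:
        played.append(x.copy())            # decision played at round t
        for g in arrivals:                 # one descent step per delayed gradient
            x = x - eta * np.asarray(g, dtype=float)
            norm = np.linalg.norm(x)
            if norm > radius:              # project back onto the feasible ball
                x = x * (radius / norm)
    return played
```

Note that the iterate for round t depends only on gradients that have already arrived, so delays simply postpone descent steps rather than invalidating them; this is the property the paper's analysis exploits.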


