Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition
We study switching-constrained online convex optimization (OCO), in which the player has a limited number of opportunities to change her action. While the discrete analog of this online learning task has been studied extensively, previous work in the continuous setting has neither established the minimax rate nor achieved it algorithmically. Here we show that T-round switching-constrained OCO with fewer than K switches has a minimax regret of Θ(T/√K). In particular, the minimax regret is at least T/√(2K) in one dimension and at least T/√K in higher dimensions. The lower bound in higher dimensions is obtained via an orthogonal-subspace argument. The minimax analysis in one dimension is more involved: to establish the one-dimensional result, we introduce the fugal game relaxation, whose minimax regret is a lower bound on that of switching-constrained OCO. We show that the minimax regret of the fugal game is at least T/√(2K), which yields the minimax lower bound in one dimension. We then show that a mini-batching algorithm achieves an O(T/√K) upper bound, and we conclude that the minimax regret of switching-constrained OCO is Θ(T/√K) for every K. This is in sharp contrast to its discrete counterpart, the switching-constrained prediction-from-experts problem, whose minimax regret exhibits a phase transition between the low-switching and high-switching regimes. In the case of bandit feedback, we first determine a novel linear (in T) minimax regret for bandit linear optimization against the strongly adaptive adversary of OCO, implying that a slightly weaker adversary is appropriate. We also establish the minimax regret of switching-constrained bandit convex optimization in dimension n > 2 to be Θ̃(T/√K).
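The mini-batching upper bound mentioned above admits a short illustrative sketch: split the T rounds into roughly K blocks, keep the action fixed within each block (so at most K - 1 switches occur), and update it between blocks with a projected online-gradient step on the block-averaged subgradients. The code below is a hedged sketch under assumed conditions (unit-radius ball domain, unit-norm subgradients, a hypothetical loss_grads oracle), not the paper's exact construction.

```python
import numpy as np

def minibatched_ogd(loss_grads, T, K, dim, radius=1.0):
    """Illustrative mini-batching sketch for switching-constrained OCO.

    loss_grads: callable(t, x) -> subgradient of the round-t loss at x
                (hypothetical online oracle; an assumption for this sketch).
    The action changes only between blocks, so at most K - 1 switches occur.
    """
    block_len = int(np.ceil(T / K))       # each block covers about T/K rounds
    eta = radius / np.sqrt(K)             # OGD step size over K blocks (assumes unit-norm subgradients)
    x = np.zeros(dim)                     # current action, held fixed within a block
    actions = []
    t = 0
    while t < T:
        g_sum = np.zeros(dim)
        for _ in range(min(block_len, T - t)):
            actions.append(x.copy())      # play the same point for the whole block
            g_sum += loss_grads(t, x)     # accumulate the subgradients observed in the block
            t += 1
        x = x - eta * g_sum / block_len   # one gradient step on the block-averaged subgradient
        norm = np.linalg.norm(x)
        if norm > radius:                 # project back onto the ball of the given radius
            x = x * (radius / norm)
    return actions
```

Under these assumptions, the standard online-gradient-descent analysis applied to the K block-averaged losses gives regret O(√K) per unit block length, and multiplying by the block length T/K recovers the O(T/√K) upper bound stated in the abstract.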