Minimax Optimal Online Stochastic Learning for Sequences of Convex Functions under Sub-Gradient Observation Failures

04/19/2019
by Hakan Gokcesu, et al.

We study online convex optimization under stochastic sub-gradient observation faults, for which we introduce adaptive algorithms with minimax optimal regret guarantees. We specifically study scenarios where our sub-gradient observations can be noisy or even completely missing in a stochastic manner. To this end, we propose algorithms based on the sub-gradient descent method that achieve tight minimax optimal regret bounds. When necessary, these algorithms exploit properties of the underlying stochastic setting to optimize their learning rates (step sizes). These optimizations are the main factor in achieving the minimax optimal performance guarantees, especially when observations are stochastically missing. In real-world scenarios, however, these properties of the underlying stochastic setting may not be revealed to the optimizer. For such scenarios, we propose a blind algorithm that estimates these properties empirically in a generally applicable manner. Through extensive experiments, we show that this empirical approach naturally combines regular stochastic gradient descent with the minimax optimal algorithms (which work best for randomized and adversarial function sequences, respectively).
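The abstract does not include pseudocode, so the following is only a minimal Python sketch of the setting it describes, not the authors' algorithm: projected online sub-gradient descent where each sub-gradient observation fails with some probability, and the step size is scaled by either a known observation probability or a running empirical estimate of it (the "blind" variant). The oracle interface, the D/(G*sqrt(p*t)) step-size schedule, and all names and parameters here are illustrative assumptions.

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the centered ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def osgd_with_failures(subgrad_oracle, T, dim, D=1.0, G=1.0, p=None):
    """Projected online sub-gradient descent when sub-gradient observations
    may be missing (the oracle returns None on a failed round).

    D: diameter bound on the decision set, G: sub-gradient norm bound.
    p: observation probability if known; when None, it is estimated
       empirically from the fraction of rounds observed so far
       (a sketch of the 'blind' idea, assumed here for illustration).
    """
    x = np.zeros(dim)
    observed = 0
    for t in range(1, T + 1):
        g = subgrad_oracle(t, x)
        if g is None:
            continue  # no update on a failed observation
        observed += 1
        p_hat = p if p is not None else max(observed / t, 1.0 / t)
        # Standard D/(G*sqrt(t)) schedule, inflated by 1/sqrt(p_hat)
        # to compensate for the rounds whose sub-gradients were lost.
        eta = D / (G * np.sqrt(p_hat * t))
        x = project_ball(x - eta * g, D)
    return x

# Toy usage: f_t(x) = ||x - target||^2 on the unit ball,
# with each sub-gradient observed with probability 0.5.
rng = np.random.default_rng(0)
target = np.ones(3) / np.sqrt(3)

def oracle(t, x):
    if rng.random() > 0.5:
        return None              # observation failure
    return 2.0 * (x - target)    # (sub-)gradient of the quadratic loss

x_final = osgd_with_failures(oracle, T=10_000, dim=3, D=1.0, G=4.0)
```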


Related research

09/23/2019: Necessary and Sufficient Conditions for Adaptive, Mirror, and Standard Gradient Methods
We study the impact of the constraint set and gradient geometry on the c...

06/30/2019: Efficient Online Convex Optimization with Adaptively Minimax Optimal Dynamic Regret
We introduce an online convex optimization algorithm using projected sub...

02/13/2020: An Optimal Multistage Stochastic Gradient Method for Minimax Problems
In this paper, we study the minimax optimization problem in the smooth a...

03/06/2023: Accelerated Rates between Stochastic and Adversarial Online Convex Optimization
Stochastic and adversarial data are two widely studied settings in onlin...

12/27/2022: Limitations of Information-Theoretic Generalization Bounds for Gradient Descent Methods in Stochastic Convex Optimization
To date, no "information-theoretic" frameworks for reasoning about gener...

09/01/2022: Optimal Regularized Online Convex Allocation by Adaptive Re-Solving
This paper introduces a dual-based algorithm framework for solving the r...

04/07/2016: Deep Online Convex Optimization with Gated Games
Methods from convex optimization are widely used as building blocks for ...
