Unconstrained Online Learning with Unbounded Losses

06/08/2023
by Andrew Jacobsen, et al.

Algorithms for online learning typically require one or more boundedness assumptions: that the domain is bounded, that the losses are Lipschitz, or both. In this paper, we develop a new setting for online learning with unbounded domains and non-Lipschitz losses. For this setting we provide an algorithm which guarantees R_T(u) ≤ Õ(G‖u‖√(T) + L‖u‖^2√(T)) regret on any problem where the subgradients satisfy ‖g_t‖ ≤ G + L‖w_t‖, and show that this bound is unimprovable without further assumptions. We leverage this algorithm to develop new saddle-point optimization algorithms that converge in duality gap in unbounded domains, even in the absence of meaningful curvature. Finally, we provide the first algorithm achieving non-trivial dynamic regret in an unbounded domain for non-Lipschitz losses, as well as a matching lower bound. The regret of our dynamic regret algorithm automatically improves to a novel L^* bound when the losses are smooth.
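To make the growth condition concrete, the sketch below (an illustration based on the abstract, not code from the paper) checks numerically that the squared loss, a canonical non-Lipschitz loss over an unbounded domain, satisfies ‖g_t‖ ≤ G + L‖w_t‖ with G = XY and L = X^2 whenever features are bounded by X and labels by Y; the names X, Y, d, and the sampling scheme are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): the squared loss
# f_t(w) = 0.5 * (<w, x_t> - y_t)^2 has gradient g_t = (<w, x_t> - y_t) * x_t,
# so ||g_t|| <= ||x_t|| * (||x_t|| * ||w|| + |y_t|) <= G + L * ||w||
# with G = X * Y and L = X**2, assuming ||x_t|| <= X and |y_t| <= Y.
# X and Y are illustrative bounds chosen here, not values from the paper.

rng = np.random.default_rng(0)
X, Y, d = 1.0, 1.0, 5        # feature-norm bound, label bound, dimension
G, L = X * Y, X ** 2         # constants in the growth condition

for _ in range(1000):
    x = rng.uniform(-1.0, 1.0, d)
    x *= X / max(np.linalg.norm(x), 1e-12)  # enforce ||x|| <= X
    y = rng.uniform(-Y, Y)
    w = rng.normal(scale=10.0, size=d)      # iterates may be arbitrarily large
    g = (w @ x - y) * x                     # gradient of the squared loss at w
    assert np.linalg.norm(g) <= G + L * np.linalg.norm(w) + 1e-9
```

Losses with subgradients bounded this way are exactly the regime covered by the abstract's Õ(G‖u‖√(T) + L‖u‖^2√(T)) guarantee.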
