Less Regret via Online Conditioning

02/25/2010
by Matthew Streeter et al.

We analyze and evaluate an online gradient descent algorithm with adaptive per-coordinate adjustment of learning rates. Our algorithm can be thought of as an online version of batch gradient descent with a diagonal preconditioner. This approach leads to regret bounds that are stronger than those of standard online gradient descent for general online convex optimization problems. Experimentally, we show that our algorithm is competitive with state-of-the-art algorithms for large scale machine learning problems.
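The core idea of adaptive per-coordinate learning rates can be sketched as follows: each coordinate's step size shrinks with the inverse square root of that coordinate's accumulated squared gradients, which acts like a diagonal preconditioner. This is a minimal illustrative sketch, not the paper's exact algorithm; the function name, the base rate `eta`, and the stabilizer `eps` are assumptions for the example.

```python
import numpy as np

def per_coordinate_ogd(grads, dim, eta=1.0, eps=1e-8):
    """Online gradient descent with per-coordinate adaptive step sizes.

    Illustrative sketch: coordinates that have seen large gradients get
    smaller steps, while rarely-updated coordinates keep larger steps.
    """
    x = np.zeros(dim)
    g_sq = np.zeros(dim)  # running sum of squared gradients, per coordinate
    for g in grads:
        g_sq += g ** 2
        # per-coordinate step: eta / sqrt(sum of squared gradients so far)
        x -= eta * g / (np.sqrt(g_sq) + eps)
    return x

# Example: a coordinate updated often adapts, an untouched one stays at 0
grads = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 0.0])]
x = per_coordinate_ogd(grads, dim=2)
```

Because the effective rate on each coordinate decays like 1/sqrt of its own gradient history, frequently-active features take progressively smaller steps while sparse features remain responsive, which is what drives the improved regret bounds for problems with uneven per-coordinate gradient scales.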
