Learning without Concentration

01/01/2014
by Shahar Mendelson

We obtain sharp bounds on the performance of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails. Rather than resorting to a concentration-based argument, the method used here relies on a "small-ball" assumption and thus holds for classes consisting of heavy-tailed functions and for heavy-tailed targets. The resulting estimates scale correctly with the "noise level" of the problem, and when applied to the classical, bounded scenario, always improve the known bounds.
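To make the setting concrete, here is a minimal, hypothetical sketch (not the paper's construction): ERM with the squared loss over a convex class, taken here to be linear predictors constrained to an l2 ball, fitted by projected gradient descent on data whose design and noise are both heavy-tailed (Student-t with 3 degrees of freedom), so a bounded or sub-Gaussian analysis would not apply. Informally, the paper's small-ball assumption asks that each class member stay away from zero with nontrivial probability relative to its L2 norm; the sketch does not verify that condition, it only sets up the ERM problem. All dimensions, the radius R, and the distributions below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: heavy-tailed design and noise (Student-t, 3 degrees
# of freedom), so neither the class members nor the target are bounded.
n, d = 200, 5
X = rng.standard_t(df=3, size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + rng.standard_t(df=3, size=n)

# Convex class: linear predictors {x -> <w, x> : ||w||_2 <= R}.
R = 10.0

def project_l2(w, radius):
    """Euclidean projection onto the l2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

# Empirical risk minimization for the squared loss via projected gradient
# descent; the step size uses the smoothness constant of the empirical risk.
L = 2.0 / n * np.linalg.norm(X, 2) ** 2
w = np.zeros(d)
for _ in range(2000):
    grad = 2.0 / n * X.T @ (X @ w - y)  # gradient of the empirical squared loss
    w = project_l2(w - grad / L, R)

print("estimation error:", np.linalg.norm(w - w_star))
```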
