Error bounds for sparse classifiers in high-dimensions

10/07/2018
by Antoine Dedieu, et al.

We prove an L2 recovery bound for a family of sparse estimators defined as minimizers of some empirical loss functions, including the hinge loss and the logistic loss. More precisely, we achieve an upper bound for coefficient estimation scaling as (k*/n) log(p/k*), where n and p are the dimensions of the design matrix and k* is the sparsity (number of nonzero coefficients) of the theoretical loss minimizer. This is done under standard assumptions, for which we derive stronger versions of a cone condition and of restricted strong convexity. Our bound holds both with high probability and in expectation, and applies to an L1-regularized estimator and to the recently introduced Slope estimator, which we generalize to classification problems. Slope presents the advantage of adapting to unknown sparsity. We therefore propose a tractable proximal algorithm to compute it and assess its empirical performance. Our results match the best existing bounds for classification and regression problems.
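The abstract only states that a tractable proximal algorithm is proposed, without giving details. The sketch below is one plausible reading, assuming proximal gradient descent on the logistic loss with the sorted-L1 (Slope) penalty, whose prox can be evaluated exactly by a stack-based pool-adjacent-violators scheme known from the Slope literature. The function names (`prox_sorted_l1`, `slope_logistic`), the step size, and the lambda sequence are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 (Slope) penalty:
        argmin_x 0.5 * ||x - v||^2 + sum_i lam_i * |x|_(i),
    where |x|_(1) >= ... >= |x|_(p) and lam is non-increasing, non-negative.
    Uses pool-adjacent-violators on the sorted, shifted magnitudes."""
    sign = np.sign(v)
    av = np.abs(v)
    order = np.argsort(-av)            # sort |v| in decreasing order
    z = av[order] - lam                # shifted sequence to be made non-increasing
    blocks = []                        # each block: [start, end, average value]
    for i, zi in enumerate(z):
        blocks.append([i, i, zi])
        # Merge while monotonicity (non-increasing) is violated.
        while len(blocks) > 1 and blocks[-2][2] <= blocks[-1][2]:
            s1, e1, v1 = blocks.pop()
            s0, e0, v0 = blocks.pop()
            n0, n1 = e0 - s0 + 1, e1 - s1 + 1
            blocks.append([s0, e1, (n0 * v0 + n1 * v1) / (n0 + n1)])
    x_sorted = np.empty_like(z)
    for s, e, val in blocks:
        x_sorted[s:e + 1] = max(val, 0.0)   # clip negatives at zero
    x = np.empty_like(x_sorted)
    x[order] = x_sorted                     # undo the sort
    return sign * x

def slope_logistic(X, y, lam, n_iter=500, step=None):
    """Proximal gradient for the Slope-penalized logistic loss
        (1/n) * sum_i log(1 + exp(-y_i * x_i^T b)) + sum_j lam_j * |b|_(j),
    with labels y in {-1, +1}. The gradient of the logistic loss is
    ||X||_2^2 / (4n)-Lipschitz, so step = 4n / ||X||_2^2 is a safe choice."""
    n, p = X.shape
    if step is None:
        step = 4.0 * n / np.linalg.norm(X, 2) ** 2
    b = np.zeros(p)
    for _ in range(n_iter):
        margins = y * (X @ b)
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
        b = prox_sorted_l1(b - step * grad, step * lam)
    return b

# Illustrative usage with a decreasing, BH-type weight sequence (an assumption).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta = np.zeros(50); beta[:5] = 1.0
y = np.sign(X @ beta + 0.1 * rng.standard_normal(200))
lam = 0.05 * np.sqrt(np.log(50 / np.arange(1, 51)))
b_hat = slope_logistic(X, y, lam)
```

A decreasing lambda sequence is what lets Slope adapt to unknown sparsity: the largest coefficients receive the heaviest shrinkage weights, so the effective regularization level scales with the number of active variables rather than being fixed in advance.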
