Effective Dimension of Exp-concave Optimization

05/21/2018
by Naman Agarwal, et al.

We investigate the role of the effective (a.k.a. statistical) dimension in determining both the statistical and the computational costs associated with exp-concave stochastic minimization. We derive sample complexity bounds that scale with d_λ/ϵ, where d_λ is the effective dimension associated with the regularization parameter λ. These are the first fast rates in this setting that do not exhibit any explicit dependence either on the intrinsic dimension or the ℓ_2-norm of the optimal classifier. We also propose fast preconditioned methods that solve the ERM problem in time Õ(nnz(X) + min_{λ'>λ} (λ'/λ) d_{λ'}^2 d), where nnz(X) is the number of nonzero entries in the data. Our analysis emphasizes interesting connections between leverage scores, algorithmic stability and regularization. In particular, our algorithm involves a novel technique for choosing a regularization parameter λ' that minimizes the complexity bound (λ'/λ) d_{λ'}^2 d, while avoiding the full (approximate) computation of the effective dimension for each candidate λ'. All of our results extend to the kernel setting.
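For readers unfamiliar with the quantity d_λ, the effective dimension in ridge-type settings is commonly defined as tr(XᵀX (XᵀX + λI)⁻¹) = Σ_i s_i² / (s_i² + λ), where s_i are the singular values of the data matrix X. Below is a minimal NumPy sketch of this standard definition; it illustrates the concept only and is not the paper's algorithm (which avoids computing d_λ' exactly for every candidate λ').

```python
import numpy as np

def effective_dimension(X, lam):
    """Effective (statistical) dimension d_lambda of data matrix X.

    Standard ridge-regression definition:
        d_lambda = trace(X^T X (X^T X + lam * I)^{-1})
                 = sum_i s_i^2 / (s_i^2 + lam),
    where s_i are the singular values of X.
    """
    s = np.linalg.svd(X, compute_uv=False)  # singular values of X
    return float(np.sum(s**2 / (s**2 + lam)))

# Illustration: d_lambda interpolates between rank(X) and 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))          # hypothetical data matrix
d_lo = effective_dimension(X, 1e-9)         # lam -> 0: approaches rank(X) = 10
d_hi = effective_dimension(X, 1e6)          # large lam: approaches 0
```

As λ grows, d_λ decreases monotonically, which is why trading a larger λ' against the factor λ'/λ in the bound above is a nontrivial balancing act.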

