On the asymptotic properties of SLOPE

08/23/2019
by Michał Kos, et al.

Sorted L-One Penalized Estimator (SLOPE) is a relatively new convex optimization procedure for selecting predictors in large databases. In contrast to LASSO, SLOPE has been proved to be asymptotically minimax in the context of sparse high-dimensional generalized linear models. Additionally, when the design matrix is orthogonal, SLOPE with the sequence of tuning parameters λ^BH, corresponding to the sequence of decaying thresholds of the Benjamini-Hochberg multiple testing correction, provably controls the False Discovery Rate (FDR) in the multiple regression model. In this article we provide new asymptotic results on the properties of SLOPE when the elements of the design matrix are i.i.d. random variables from the Gaussian distribution. Specifically, we provide conditions under which the asymptotic FDR of SLOPE based on the sequence λ^BH converges to zero and the power converges to one. We illustrate our theoretical asymptotic results with an extensive simulation study. We also provide precise formulas describing the FDR of SLOPE under different loss functions, which set the stage for future results on the model selection properties of SLOPE and its extensions.
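For concreteness, the following Python sketch (illustrative, not code from the paper) shows the two objects the abstract refers to: the λ^BH tuning sequence, built from decaying Benjamini-Hochberg-style Gaussian quantile thresholds, and the sorted-L1 penalty that distinguishes SLOPE from LASSO. The function names bh_lambdas and sorted_l1_penalty are hypothetical.

    # Minimal sketch of the SLOPE ingredients mentioned above; names are illustrative.
    import numpy as np
    from scipy.stats import norm

    def bh_lambdas(p, q=0.1, sigma=1.0):
        """lambda^BH_i = sigma * Phi^{-1}(1 - q*i/(2p)): a decreasing sequence
        mirroring the Benjamini-Hochberg thresholds for p hypotheses."""
        i = np.arange(1, p + 1)
        return sigma * norm.ppf(1.0 - q * i / (2.0 * p))

    def sorted_l1_penalty(beta, lambdas):
        """SLOPE penalty: sum_i lambda_i * |beta|_(i), where |beta|_(1) >= ... >= |beta|_(p),
        so the largest coefficients are matched with the largest penalties."""
        return np.dot(lambdas, np.sort(np.abs(beta))[::-1])

    # Example: p = 100 predictors, 5 of them truly nonzero.
    p = 100
    beta = np.zeros(p)
    beta[:5] = 3.0
    lam = bh_lambdas(p, q=0.1)
    print(sorted_l1_penalty(beta, lam))

Unlike LASSO's single tuning parameter, the decaying sequence λ^BH penalizes larger (sorted) coefficients more heavily, which is what enables the FDR-control and power results described in the abstract.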
