Penalized regression with multiple loss functions and selection by vote

06/29/2020
by Guorong Dai, et al.

This article considers a linear model in a high-dimensional data setting. We propose a procedure that uses multiple loss functions both to select relevant predictors and to estimate parameters, and we study its asymptotic properties. Variable selection is conducted by a procedure called "vote", which aggregates the results from the individual penalized loss functions. Handling the objective functions separately simplifies the algorithms and allows parallel computing, which is convenient and fast. As a special example we consider a quantile regression model that optimally combines multiple quantile levels. We show that the resulting estimators of the parameter vector are asymptotically efficient. Simulations and a real-data application confirm the three main advantages of our approach: (a) a lower false discovery rate in variable selection; (b) better parameter estimation; (c) more efficient computation.
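
The paper's actual procedure and tuning are more involved, but a minimal Python sketch can illustrate the "vote" idea: fit several independently penalized regressions, one per loss function, and keep the predictors that survive in enough of them. The sketch below uses scikit-learn's L1-penalized squared-loss fit (Lasso) and L1-penalized quantile fits (QuantileRegressor) as the multiple losses; the penalty levels, quantile levels, and vote threshold are illustrative choices, not the authors'.

```python
import numpy as np
from sklearn.linear_model import Lasso, QuantileRegressor

def vote_select(X, y, models, min_votes=2, tol=1e-8):
    """Fit each penalized model separately (the fits are independent,
    so they could run in parallel) and keep every predictor whose
    coefficient is nonzero in at least `min_votes` of the fits."""
    votes = np.zeros(X.shape[1], dtype=int)
    for model in models:
        model.fit(X, y)
        votes += (np.abs(model.coef_) > tol).astype(int)
    return np.flatnonzero(votes >= min_votes)

# Toy data: 3 relevant predictors out of 20.
rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(n)

# One L1-penalized fit per loss: squared loss plus three quantile losses.
# Penalty levels are arbitrary here; in practice they would be tuned.
models = [
    Lasso(alpha=0.1),
    QuantileRegressor(quantile=0.25, alpha=0.05),
    QuantileRegressor(quantile=0.50, alpha=0.05),
    QuantileRegressor(quantile=0.75, alpha=0.05),
]
print(vote_select(X, y, models, min_votes=3))  # typically [0 1 2]
```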
