Testing Sparsity-Inducing Penalties

12/18/2017
by Maryclare Griffin, et al.

It is well understood that many penalized maximum likelihood estimators correspond to posterior mode estimators under specific prior distributions. The appropriateness of a particular class of penalty functions can therefore be interpreted as the appropriateness of a prior model for the parameters. For example, the appropriateness of a lasso penalty for regression coefficients depends on the extent to which the empirical distribution of the regression coefficients resembles a Laplace distribution. We give a simple approximate procedure for testing whether a Laplace prior model is appropriate and, accordingly, whether a lasso penalized estimate is appropriate. This testing procedure is designed to have power against exponential power prior models, which correspond to ℓ_q penalties. Via simulations, we show that the testing procedure achieves the desired level and has enough power to detect violations of the Laplace assumption when the number of observations and the number of unknown regression coefficients are large. We then introduce an adaptive procedure that chooses a more appropriate prior model, and corresponding penalty, from the class of exponential power prior models when the null hypothesis is rejected. We show that this computationally simple adaptive procedure can improve estimation of the unknown regression coefficients both when they are drawn from an exponential power distribution and when they are sparse and drawn from a spike-and-slab distribution.
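To make the penalty-prior correspondence concrete: the ℓ_q penalty λ∑|β_j|^q is the negative log density (up to constants) of an exponential power prior proportional to exp(-λ|β_j|^q), with q = 1 giving the lasso/Laplace pair. The minimal Python sketch below, which is illustrative only and is not the approximate test developed in the paper, simulates coefficients from a Laplace distribution, computes a lasso fit (the posterior mode under independent Laplace priors), and informally compares the nonzero estimates against a fitted Laplace distribution with a Kolmogorov-Smirnov statistic. The penalty level alpha = 0.1 and all simulation settings are arbitrary choices for illustration.

    # Illustrative sketch only: lasso fit as a Laplace posterior mode, plus an
    # informal (hypothetical) check of the Laplace assumption on the estimates.
    import numpy as np
    from scipy import stats
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 500, 50

    # Simulate coefficients from a Laplace distribution: the case in which the
    # lasso penalty is "appropriate" in the posterior-mode sense.
    beta = rng.laplace(loc=0.0, scale=1.0, size=p)
    X = rng.standard_normal((n, p))
    y = X @ beta + rng.standard_normal(n)

    # Lasso estimate = posterior mode under independent Laplace priors
    # (penalty lambda * sum |beta_j| <-> prior density exp(-lambda * |beta_j|)).
    lasso = Lasso(alpha=0.1).fit(X, y)
    beta_hat = lasso.coef_

    # Informal diagnostic, NOT the paper's procedure: fit a Laplace distribution
    # to the nonzero estimates and compare with a KS statistic. Fitting the
    # parameters first makes the nominal KS p-value optimistic.
    nonzero = beta_hat[beta_hat != 0]
    loc, scale = stats.laplace.fit(nonzero)
    ks = stats.kstest(nonzero, "laplace", args=(loc, scale))
    print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")

In the paper's framework, rejecting the Laplace hypothesis would suggest replacing q = 1 with a better-fitting exponent from the exponential power family and using the corresponding ℓ_q penalty; the sketch above stops at the informal comparison.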
