l_1-ball Prior: Uncertainty Quantification with Exact Zeros

06/02/2020
by Maoran Xu et al.

Lasso and l_1-regularization play a dominant role in high-dimensional statistics and machine learning. Their most attractive property is that they produce sparse parameter estimates containing exact zeros. For uncertainty quantification, popular Bayesian approaches choose a continuous prior that concentrates mass near zero; as a limitation, however, the resulting continuous posterior cannot be exactly sparse. This makes such priors problematic for advanced models, such as change-point detection, linear trend filtering, and convex clustering, where zeros are crucial for dimension reduction. In this article, we propose a new class of priors, obtained by projecting a continuous distribution onto the l_1-ball with radius r. The projection creates positive probability on the lower-dimensional boundary of the ball, where the random variable now contains both continuous elements and exact zeros; meanwhile, assigning a prior to the radius r gives robustness to large signals. Compared with the spike-and-slab prior, our proposal offers substantial flexibility in the prior specification and adaptive shrinkage of small signals; in addition, it enjoys efficient optimization-based posterior estimation. In asymptotic theory, the prior attains the minimax-optimal rate for posterior concentration around the truth; in practice, it enables direct application of the rich class of l_1-tricks in Bayesian models. We demonstrate its potential in a data application analyzing electroencephalogram time series from a human working-memory study, using a nonparametric mixture model of linear trend filters.

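To make the projection step concrete, below is a minimal sketch (not taken from the paper) of drawing from such a projected prior for a fixed radius r, using the standard sort-based Euclidean projection onto the l_1-ball in the style of Duchi et al. (2008). The function name `project_l1_ball` and the Gaussian base distribution are illustrative assumptions; the paper's full construction, including the prior on the radius r, is omitted here.

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of v onto the l1-ball {x : ||x||_1 <= r}.

    Sort-based algorithm in the style of Duchi et al. (2008). Coordinates
    whose magnitude falls below the soft-threshold become exactly zero.
    """
    if np.sum(np.abs(v)) <= r:
        return v.copy()                      # already inside the ball
    u = np.sort(np.abs(v))[::-1]             # magnitudes, descending
    css = np.cumsum(u)
    # Largest index j with u_j > (sum of top-j magnitudes - r) / j
    rho = np.nonzero(u - (css - r) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)     # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# One draw from the induced prior: sample a continuous (here, Gaussian)
# vector, then project it onto the l1-ball of radius r = 2.
rng = np.random.default_rng(0)
beta_raw = rng.normal(size=10)
beta = project_l1_ball(beta_raw, r=2.0)
print(beta)                                  # several entries are exactly 0
print(int(np.sum(beta == 0.0)), "exact zeros")
```

Because the projection soft-thresholds all coordinates below the level theta to exactly zero, a draw from a fully continuous base distribution lands on a lower-dimensional face of the ball with positive probability, which is what gives the prior its mass on exactly sparse configurations.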