High dimensional errors-in-variables models with dependent measurements

02/09/2015
by   Mark Rudelson, et al.

Suppose that we observe y ∈ R^f and X ∈ R^{f × m} in the following errors-in-variables model:

y = X_0 β^* + ϵ,
X = X_0 + W,

where X_0 is an f × m design matrix with independent subgaussian row vectors, ϵ ∈ R^f is a noise vector, and W is a mean-zero f × m random noise matrix with independent subgaussian column vectors, independent of X_0 and ϵ. This model differs significantly from those analyzed in the literature in that we allow the measurement error for each covariate to be a dependent vector across its f observations. Such error structures appear in the science literature when modeling the trial-to-trial fluctuations in response strength shared across a set of neurons. Under sparsity and restricted eigenvalue type conditions, we show that one can recover a sparse vector β^* ∈ R^m from the model given a single observation matrix X and the response vector y. We establish consistency in estimating β^* and obtain rates of convergence in the ℓ_q norm, where q = 1, 2 for the Lasso-type estimator and q ∈ [1, 2] for a Dantzig-type conic programming estimator. We show error bounds that approach those of the regular Lasso and the Dantzig selector as the measurement errors in W tend to 0.
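To make the setup concrete, here is a minimal simulation sketch of the model, assuming AR(1)-correlated column noise (one simple example of measurement error that is dependent across the f observations) and a Loh–Wainwright-style corrected Lasso solved by proximal gradient descent. The noise covariance B, the regularization parameter, and the solver are illustrative choices; the paper's actual estimators and conditions differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

f, m, s = 200, 50, 5          # observations, covariates, sparsity of beta*
beta_star = np.zeros(m)
beta_star[:s] = 1.0

# Design X_0 with independent (here Gaussian) rows.
X0 = rng.standard_normal((f, m))

# Dependent measurement error: each column of W is an AR(1)-correlated
# vector across the f observations; columns are mutually independent.
rho, sigma_w = 0.5, 0.3       # assumed noise parameters, for illustration
idx = np.arange(f)
B = sigma_w**2 * rho**np.abs(idx[:, None] - idx[None, :])  # f x f covariance
W = np.linalg.cholesky(B) @ rng.standard_normal((f, m))

eps = 0.1 * rng.standard_normal(f)
y = X0 @ beta_star + eps
X = X0 + W

# Corrected Gram matrix: since E[W'W] = tr(B) * I under this column model,
# subtracting (tr(B)/f) I removes the bias W adds to X'X/f.
Gamma = X.T @ X / f - (np.trace(B) / f) * np.eye(m)
gamma = X.T @ y / f

# Proximal gradient (ISTA) on  0.5 b'Gamma b - gamma'b + lam * ||b||_1.
lam = 0.1
step = 1.0 / np.linalg.norm(Gamma, 2)
b = np.zeros(m)
for _ in range(2000):
    b = b - step * (Gamma @ b - gamma)
    b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # soft-threshold

print("l2 estimation error:", np.linalg.norm(b - beta_star))
```

Note that without the (tr(B)/f) I correction, the plain Lasso applied to (y, X) would be biased by the measurement noise; as sigma_w shrinks, the correction vanishes and the estimate approaches the regular Lasso, matching the error-bound behavior described above.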

