Efficient Statistics for Sparse Graphical Models from Truncated Samples

06/17/2020
by Arnab Bhattacharyya, et al.

In this paper, we study high-dimensional estimation from truncated samples. We focus on two fundamental and classical problems: (i) inference of sparse Gaussian graphical models and (ii) support recovery of sparse linear models.

(i) For Gaussian graphical models, suppose d-dimensional samples x are generated from a Gaussian 𝒩(μ,Σ) and observed only if they belong to a subset S ⊆ ℝ^d. We show that μ and Σ can be estimated to error ϵ in the Frobenius norm using Õ(nz(Σ^{-1})/ϵ^2) samples from the truncated 𝒩(μ,Σ), where nz(Σ^{-1}) denotes the number of nonzero entries of the precision matrix, given access to a membership oracle for S. The set S is assumed to have non-trivial measure under the unknown distribution but is otherwise arbitrary.

(ii) For sparse linear regression, suppose samples (x, y) are generated where y = x^⊤Ω^* + 𝒩(0,1), and (x, y) is observed only if y belongs to a truncation set S ⊆ ℝ. We consider the case where Ω^* is sparse with a support set of size k. Our main result establishes precise conditions on the problem dimension d, the support size k, the number of observations n, and properties of the samples and the truncation that are sufficient to recover the support of Ω^*. Specifically, we show that under mild assumptions, only O(k^2 log d) samples are needed to estimate Ω^* in the ℓ_∞-norm up to a bounded error.

For both problems, our estimator minimizes the sum of the finite-population negative log-likelihood function and an ℓ_1-regularization term.
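The closing sentence describes the estimator as an ℓ_1-penalized truncated negative log-likelihood minimizer. As a minimal illustration of that recipe (not the paper's actual algorithm or guarantees), the sketch below instantiates setting (ii) with an assumed one-sided truncation S = (0, ∞). For that S, the negative log-likelihood of an observed (x, y) is ½(y − x^⊤Ω)² + log Φ(x^⊤Ω) up to constants (Φ is the standard normal CDF), which is convex, and we minimize its ℓ_1-penalized average by proximal gradient descent with soft-thresholding. All dimensions, the penalty λ, the step size, and the choice of S are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt, pi, exp

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): d features, support size k.
d, k, n_target = 50, 3, 2000

# Sparse ground truth Omega* with +/-1 entries on a random support.
omega_star = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
omega_star[support] = rng.choice([-1.0, 1.0], size=k)

# Assumed truncation set S = (0, inf): (x, y) is observed only when y > 0.
def in_S(y):
    return y > 0.0

# Rejection-sample the observation model y = x^T Omega* + N(0, 1), keeping
# only samples whose response lands in S.
X, Y = [], []
while len(Y) < n_target:
    x = rng.standard_normal(d)
    y = x @ omega_star + rng.standard_normal()
    if in_S(y):
        X.append(x)
        Y.append(y)
X, Y = np.array(X), np.array(Y)

def Phi(t):  # standard normal CDF
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def phi(t):  # standard normal pdf
    return exp(-0.5 * t * t) / sqrt(2.0 * pi)

def nll_grad(omega):
    """Gradient of the average truncated NLL for S = (0, inf):
    grad = mean_i [ (x_i^T w - y_i) + phi(x_i^T w) / Phi(x_i^T w) ] x_i."""
    m = X @ omega
    ratio = np.array([phi(t) / max(Phi(t), 1e-12) for t in m])
    return X.T @ ((m - Y) + ratio) / len(Y)

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# ISTA-style proximal gradient on NLL + lam * ||omega||_1.
lam, step = 0.05, 0.1  # hand-picked hyperparameters, not from the paper
omega = np.zeros(d)
for _ in range(500):
    omega = soft_threshold(omega - step * nll_grad(omega), step * lam)

est_support = sorted(np.argsort(np.abs(omega))[-k:].tolist())
print("true support:     ", sorted(support.tolist()))
print("estimated support:", est_support)
```

The log Φ(x^⊤Ω) term is what corrects for the truncation: dropping it and running plain Lasso on the truncated data would bias the estimate, since x is no longer independent of the noise after conditioning on y ∈ S.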
