Approximating the covariance ellipsoid

04/15/2018
by Shahar Mendelson, et al.

We explore ways in which the covariance ellipsoid B = {v ∈ R^d : E<X,v>^2 ≤ 1} of a centred random vector X in R^d can be approximated by a simple set. The data one is given for constructing the approximating set consists of X_1,...,X_N, which are independent and distributed as X. We present a general method that can be used to construct such approximations and implement it for two types of approximating sets. We first construct a (random) set K defined by a union of intersections of slabs H_{z,α} = {v ∈ R^d : |<z,v>| ≤ α} (and therefore K is actually the output of a neural network with two hidden layers). The slabs are generated using X_1,...,X_N, and under minimal assumptions on X (e.g., X can be heavy-tailed) it suffices that N = c_1 d η^-4 log(2/η) to ensure that (1-η)K ⊂ B ⊂ (1+η)K. In some cases (e.g., if X is rotation invariant and has marginals that are well behaved in some weak sense), a smaller sample size suffices: N = c_1 d η^-2 log(2/η). We then show that if the slabs are replaced by randomly generated ellipsoids defined using X_1,...,X_N, the same degree of approximation holds when N ≥ c_2 d η^-2 log(2/η). The construction we use is based on the small-ball method.
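To make the objects above concrete, here is a minimal numerical sketch (Python). It is not the construction analysed in the paper: the Gaussian samples, the splitting of the data into blocks, the union-over-blocks definition of K and the choice of the slab width α are illustrative assumptions, used only to show how membership in the empirical covariance ellipsoid and in a union of intersections of slabs H_{z,α} can be tested.

```python
import numpy as np

# Illustrative only: Gaussian data, the block split, the union-over-blocks
# definition of K and the slab width alpha are assumptions made for this
# sketch, not the construction or the tuning analysed in the paper.

rng = np.random.default_rng(0)
d, N = 5, 2000
X = rng.standard_normal((N, d))        # samples of a centred random vector

Sigma = X.T @ X / N                    # empirical covariance, stand-in for E XX^T

def in_ellipsoid(v):
    """Membership in the (empirical) covariance ellipsoid {v : v^T Sigma v <= 1}."""
    return v @ Sigma @ v <= 1.0

def in_slab_intersection(v, Z, alpha):
    """Membership in the intersection of the slabs H_{z,alpha} over the rows z of Z."""
    return bool(np.all(np.abs(Z @ v) <= alpha))

# K is taken here as a union, over blocks of the sample, of intersections of slabs.
blocks = np.array_split(X, 20)
alpha = np.sqrt(blocks[0].shape[0])    # illustrative choice of slab width

def in_K(v):
    return any(in_slab_intersection(v, Z, alpha) for Z in blocks)

# A direction on the boundary of the empirical ellipsoid, slightly shrunk.
v = rng.standard_normal(d)
v = 0.99 * v / np.sqrt(v @ Sigma @ v)
print("in B:", in_ellipsoid(v), " in K:", in_K(v))
```

With the η-dependent choices of block size and slab width that the paper derives, a set K of this general form satisfies (1-η)K ⊂ B ⊂ (1+η)K with high probability; the sketch above fixes these parameters arbitrarily and only illustrates the membership tests.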
