Hardness Results for Minimizing the Covariance of Randomly Signed Sum of Vectors

11/26/2022
by Peng Zhang, et al.

Given vectors $v_1, \dots, v_n \in \mathbb{R}^d$ with Euclidean norm at most 1 and a vector $x_0 \in [-1,1]^n$, our goal is to sample a random signing $x \in \{\pm 1\}^n$ with $\mathbb{E}[x] = x_0$ such that the operator norm of the covariance of the signed sum $\sum_{i=1}^n x(i) v_i$ is as small as possible. This problem arises in algorithmic discrepancy theory and its applications to the design of randomized experiments. It is known that one can sample a random signing with expectation $x_0$ whose covariance has operator norm at most 1. In this paper, we prove two hardness results for this problem. First, we show it is NP-hard to distinguish a list of vectors for which there exists a random signing with expectation $0$ such that the operator norm is $0$ from those for which any signing with expectation $0$ must have operator norm $\Omega(1)$. Second, we consider $x_0 \in [-1,1]^n$ whose entries are all around an arbitrary fixed $p \in [-1,1]$. We show it is NP-hard to distinguish a list of vectors for which there exists a random signing with expectation $x_0$ such that the operator norm is $0$ from those for which any signing with expectation $x_0$ must have operator norm $\Omega((1-|p|)^2)$.
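As a point of reference for the quantity being minimized: if the signs $x(i)$ are sampled *independently* with $\mathbb{E}[x(i)] = x_0(i)$, then $\mathrm{Var}[x(i)] = 1 - x_0(i)^2$ and the covariance of $\sum_i x(i) v_i$ is $\sum_i (1 - x_0(i)^2)\, v_i v_i^{\top}$. The sketch below (not from the paper; `independent_signing_cov_norm` is an illustrative helper) computes the operator norm of this matrix with NumPy:

```python
import numpy as np

def independent_signing_cov_norm(V, x0):
    """Operator norm of the covariance of sum_i x(i) v_i when the signs
    x(i) in {+1, -1} are independent with E[x(i)] = x0(i).

    V:  (n, d) array whose rows v_i have Euclidean norm <= 1.
    x0: (n,) array with entries in [-1, 1].
    """
    weights = 1.0 - x0**2                  # Var[x(i)] = 1 - x0(i)^2
    cov = (V * weights[:, None]).T @ V     # sum_i weights[i] * v_i v_i^T
    return np.linalg.norm(cov, 2)          # spectral (operator) norm

# Example: random unit-capped vectors, unbiased signs (x0 = 0).
rng = np.random.default_rng(0)
n, d = 8, 3
V = rng.normal(size=(n, d))
V /= np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1.0)  # norms <= 1
print(independent_signing_cov_norm(V, np.zeros(n)))
```

Independent signing can make this norm as large as $n$ (e.g. $n$ copies of the same unit vector with $x_0 = 0$), which is why the operator-norm-at-most-1 guarantee quoted in the abstract requires a correlated sampling scheme.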
