Private High-Dimensional Hypothesis Testing
We provide improved differentially private algorithms for identity testing of high-dimensional distributions. Specifically, for d-dimensional Gaussian distributions with known covariance Σ, we can test whether the distribution comes from 𝒩(μ^*, Σ) for some fixed μ^* or from some 𝒩(μ, Σ) with total variation distance at least α from 𝒩(μ^*, Σ), with (ε, 0)-differential privacy, using only Õ(d^1/2/α^2 + d^1/3/(α^4/3·ε^2/3) + 1/(α·ε)) samples if the algorithm is allowed to be computationally inefficient, and only Õ(d^1/2/α^2 + d^1/4/(α·ε)) samples for a computationally efficient algorithm. We also provide a matching lower bound showing that our computationally inefficient algorithm has optimal sample complexity. We further extend our algorithms to various related problems, including mean testing of Gaussians with bounded but unknown covariance, uniformity testing of product distributions over {± 1}^d, and tolerant testing. Our results improve over the previous best work of Canonne, Kamath, McMillan, Ullman, and Zakynthinou <cit.> for both computationally efficient and inefficient algorithms, and even our computationally efficient algorithm matches the optimal non-private sample complexity of O(√(d)/α^2) in many standard parameter settings. In addition, our results show that, surprisingly, private identity testing of d-dimensional Gaussians can be done with fewer samples than private identity testing of discrete distributions over a domain of size d <cit.>, refuting a conjectured lower bound of Canonne et al. <cit.>.
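To make the testing problem concrete, the sketch below shows a minimal non-private baseline for Gaussian identity testing with known covariance: whiten the samples, compare the empirical mean to μ^* via a chi-square-style statistic, and threshold. This is only an illustration of the problem setup and of why Θ(√d/α^2) samples suffice non-privately; it is not the paper's private algorithm, and the function name and threshold constant are placeholders chosen for exposition.

```python
import numpy as np

def mean_identity_test(samples, mu_star, Sigma, alpha):
    """Illustrative non-private identity test (sketch, not the paper's method).

    Accepts H0: samples ~ N(mu_star, Sigma); rejects when the empirical mean
    is far from mu_star in Mahalanobis distance. The threshold constant below
    is a placeholder and assumes roughly n >> sqrt(d)/alpha^2 samples.
    """
    n, d = samples.shape
    # Whiten: under H0 the rows of Y are i.i.d. N(0, I_d).
    L = np.linalg.cholesky(Sigma)
    Y = np.linalg.solve(L, (samples - mu_star).T).T
    ybar = Y.mean(axis=0)
    # Under H0, E[stat] = 0 with null fluctuations of order sqrt(d)/n;
    # under the alternative, E[stat] = ||Sigma^{-1/2}(mu - mu_star)||^2,
    # which is of order alpha^2 when the TV distance is at least alpha.
    stat = ybar @ ybar - d / n
    threshold = 0.5 * alpha**2  # placeholder constant
    return "accept" if stat <= threshold else "reject"
```

The paper's contribution is to achieve near-optimal sample complexity for this task under (ε, 0)-differential privacy, where the statistic above would need to be privatized; the private algorithms are substantially more involved than this baseline.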