Tight Bounds for ℓ_p Oblivious Subspace Embeddings

01/13/2018
by Ruosong Wang, et al.

An ℓ_p oblivious subspace embedding is a distribution over r × n matrices Π such that for any fixed n × d matrix A, Pr_Π[∀x, ‖Ax‖_p ≤ ‖ΠAx‖_p ≤ κ‖Ax‖_p] ≥ 9/10, where r is the dimension of the embedding, κ is the distortion of the embedding, and for an n-dimensional vector y, ‖y‖_p = (∑_i |y_i|^p)^{1/p} is the ℓ_p-norm. Another important property is the sparsity of Π, that is, the maximum number of non-zero entries per column, as this determines the running time of computing Π·A. While for p = 2 there are nearly optimal tradeoffs in terms of the dimension, distortion, and sparsity, much less was known for the important case 1 ≤ p < 2. In this paper we obtain nearly optimal tradeoffs for ℓ_p oblivious subspace embeddings for every 1 ≤ p < 2. We show that for every 1 ≤ p < 2, any oblivious subspace embedding with dimension r has distortion κ = Ω(1 / ((1/d)^{1/p} · log^{2/p} r + (r/n)^{1/p − 1/2})). When r = poly(d), as is the case in applications, this gives a κ = Ω(d^{1/p} / log^{2/p} d) lower bound, and shows that the oblivious subspace embedding of Sohler and Woodruff (STOC, 2011) for p = 1 and the oblivious subspace embedding of Meng and Mahoney (STOC, 2013) for 1 < p < 2 are optimal up to poly(log d) factors. We also give sparse oblivious subspace embeddings for every 1 ≤ p < 2 that are optimal in dimension and distortion, up to poly(log d) factors. Oblivious subspace embeddings are crucial for distributed and streaming environments, as well as for entrywise ℓ_p low-rank approximation. Our results give improved algorithms for these applications.
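As a concrete illustration of the definition, the following is a minimal sketch, assuming NumPy, of a sparse ℓ_1 embedding in the spirit of the Meng and Mahoney construction: Π has a single nonzero per column (so sparsity 1, and Π·A is computable in one pass over A), and each nonzero is a random sign times a standard Cauchy (1-stable) variable. The dimensions n, d, r and the helper name sparse_cauchy_embedding are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_cauchy_embedding(n, r):
    """Illustrative sparse l_1 embedding: one nonzero per column,
    placed in a random row, scaled by a random sign times a standard
    Cauchy (1-stable) variable. (Hypothetical helper, not the paper's
    exact construction.)"""
    Pi = np.zeros((r, n))
    rows = rng.integers(0, r, size=n)        # hash each column to a row
    signs = rng.choice([-1.0, 1.0], size=n)  # CountSketch-style signs
    scales = rng.standard_cauchy(size=n)     # 1-stable scaling
    Pi[rows, np.arange(n)] = signs * scales
    return Pi

# Compare ||Pi A x||_1 against ||A x||_1 for a few directions x.
n, d, r = 10_000, 5, 200                     # arbitrary test sizes
A = rng.standard_normal((n, d))
Pi = sparse_cauchy_embedding(n, r)
PiA = Pi @ A                                 # the r x d sketch of A
for _ in range(3):
    x = rng.standard_normal(d)
    ratio = np.linalg.norm(PiA @ x, 1) / np.linalg.norm(A @ x, 1)
    print(f"||Pi A x||_1 / ||A x||_1 = {ratio:.2f}")
```

Because Cauchy variables are heavy-tailed, individual ratios can be large; per the definition above, the guarantee holds with probability 9/10 over the draw of Π, with the dilation bounded by a distortion κ that grows polynomially in d rather than staying near 1.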
