Lower Memory Oblivious (Tensor) Subspace Embeddings with Fewer Random Bits: Modewise Methods for Least Squares
In this paper, new general modewise Johnson-Lindenstrauss (JL) subspace embeddings are proposed that are both considerably faster to generate and easier to store than traditional JL embeddings when working with extremely large vectors and/or tensors. Corresponding embedding results are then proven for two different types of low-dimensional (tensor) subspaces. The first of these new subspace embedding results produces improved space complexity bounds for embeddings of rank-r tensors whose CP decompositions are contained in the span of a fixed (but unknown) set of r rank-one basis tensors. In the traditional vector setting, this first result yields new and very general near-optimal oblivious subspace embedding constructions that require fewer random bits to generate than standard JL embeddings when embedding subspaces of C^N spanned by basis vectors with special Kronecker structure. The second result proven herein provides new fast JL embeddings of arbitrary r-dimensional subspaces S ⊂ C^N which also require fewer random bits (and are therefore easier to store, i.e., require less space) than standard fast JL embedding methods in order to achieve small ϵ-distortions. These new oblivious subspace embeddings work by (i) effectively folding any given vector in S into a (not necessarily low-rank) tensor, and then (ii) embedding the resulting tensor into C^m for m ≤ C r log^c(N) / ϵ^2. Applications related to compression and to fast compressed least squares solution methods are also considered, including those used for fitting low-rank CP decompositions, and the proposed JL embedding results are shown to work well numerically in both settings.
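To make the two-step construction concrete, the following is a minimal sketch of a modewise embedding applied to a folded vector: the vector is reshaped into a tensor and each mode is compressed by an independent small random matrix before flattening back to C^m. The function name `modewise_jl_embed`, the choice of dense Gaussian mode matrices, and the specific dimensions are illustrative assumptions only; the paper's actual constructions (and constants) may differ, e.g., by using structured or fast JL matrices per mode.

```python
import numpy as np

def modewise_jl_embed(x, mode_dims, sketch_dims, rng=None):
    """Illustrative sketch (not the paper's exact construction):
    (i) fold the vector x into a tensor of shape mode_dims, then
    (ii) apply an independent Gaussian JL matrix along each mode
    and flatten the result into a vector of length prod(sketch_dims)."""
    rng = np.random.default_rng() if rng is None else rng
    assert np.prod(mode_dims) == x.size, "mode_dims must factor the vector length"
    T = x.reshape(mode_dims)                      # step (i): fold x into a tensor
    for k, (n_k, m_k) in enumerate(zip(mode_dims, sketch_dims)):
        # Mode-k JL matrix: only m_k * n_k random entries are stored per mode,
        # far fewer than the m * N entries of a dense JL matrix on C^N.
        A_k = rng.standard_normal((m_k, n_k)) / np.sqrt(m_k)
        # Mode-k product: contract axis k of T with the columns of A_k.
        T = np.moveaxis(np.tensordot(A_k, T, axes=(1, k)), 0, k)
    return T.reshape(-1)                          # step (ii): flatten into C^m

# Example usage: embed a vector of length N = 32^3 into dimension m = 8^3 = 512.
x = np.random.default_rng(0).standard_normal(32**3)
y = modewise_jl_embed(x, mode_dims=(32, 32, 32), sketch_dims=(8, 8, 8))
print(y.shape)  # (512,)
```

Because only the small per-mode matrices need to be generated and stored, such a construction needs fewer random bits and less memory than a single dense m-by-N JL matrix, which is the storage advantage the abstract refers to.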