Hashing-like Johnson–Lindenstrauss transforms and their extreme singular values
The Johnson–Lindenstrauss (JL) lemma is a powerful tool for dimensionality reduction in modern algorithm design. It states that any set of points in a high-dimensional Euclidean space can be embedded into a space of much lower dimension while approximately preserving all pairwise Euclidean distances. Random matrices satisfying the lemma are called JL transforms (JLTs). Inspired by existing s-hashing JLTs, which have exactly s nonzero entries in each column, the present work introduces s-hashing-like matrices, whose expected number of nonzero entries per column is s. Unlike for s-hashing matrices, the independence of the sub-Gaussian entries of s-hashing-like matrices, together with knowledge of their exact distribution, plays an important role in their analysis. Using properties of independent sub-Gaussian random variables, these matrices are shown to be JLTs, and their smallest and largest singular values are estimated non-asymptotically in terms of known quantities via a technique from geometric functional analysis; that is, the bounds involve no unknown "absolute constants" of the kind that often appear in random matrix theory. Using the universal Bai–Yin law, these singular values are also proved to converge almost surely to fixed quantities as the dimensions of the matrix grow to infinity. Numerical experiments suggest that s-hashing-like matrices are more effective in dimensionality reduction, and that their extreme singular values behave better, than our non-asymptotic analyses indicate.
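Below is a minimal Python sketch of the kind of matrix the abstract describes, under the assumption that an s-hashing-like sketch has i.i.d. entries equal to +1/sqrt(s) or -1/sqrt(s), each with probability s/(2m), and 0 otherwise, so that every column has s nonzero entries in expectation. The function name, parameter values, and the simple distance-preservation check are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

def hashing_like_matrix(m, n, s, rng=None):
    """Sample an m x n "s-hashing-like" sketch (assumed construction):
    entries are i.i.d., equal to +-1/sqrt(s) with probability s/(2m) each
    and 0 otherwise, so each column has s nonzeros in expectation."""
    rng = np.random.default_rng(rng)
    support = rng.random((m, n)) < s / m           # Bernoulli(s/m) nonzero pattern
    signs = rng.choice([-1.0, 1.0], size=(m, n))   # independent random signs
    return support * signs / np.sqrt(s)

# Illustrative check: norms of sketched points and extreme singular values.
n, m, s = 2000, 200, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 20))                   # 20 points in R^n (columns)
S = hashing_like_matrix(m, n, s, rng=0)
ratios = np.linalg.norm(S @ X, axis=0) / np.linalg.norm(X, axis=0)
print("norm ratios (ideally close to 1):", np.round(ratios, 3))
sv = np.linalg.svd(S, compute_uv=False)
print("largest / smallest singular value of S:", sv.max(), sv.min())
```

In this toy run the norm ratios cluster near 1, which is the JLT property the abstract refers to, and the printed extreme singular values are the quantities the paper bounds non-asymptotically.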