Improved Generalization Bound and Learning of Sparsity Patterns for Data-Driven Low-Rank Approximation

09/17/2022
by Shinsaku Sakaue et al.

Learning sketching matrices for fast and accurate low-rank approximation (LRA) has gained increasing attention. Recently, Bartlett, Indyk, and Wagner (COLT 2022) presented a generalization bound for learning-based LRA. Specifically, for rank-k approximation using an m × n learned sketching matrix with s non-zeros in each column, they proved an Õ(nsm) bound on the fat-shattering dimension (Õ hides logarithmic factors). We build on their work and make two contributions. 1. We present a better Õ(nsk) bound (k ≤ m). En route to obtaining this result, we give a low-complexity Goldberg–Jerrum algorithm for computing pseudo-inverse matrices, which may be of independent interest. 2. We relax the assumption of the previous study that sketching matrices have a fixed sparsity pattern. We prove that learning the positions of the non-zeros increases the fat-shattering dimension only by O(ns log n). In addition, experiments confirm the practical benefit of learning sparsity patterns.
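For context, the following is a minimal NumPy sketch of the sketch-and-solve LRA pipeline the abstract refers to: a sparse m × n sketching matrix S with s non-zeros per column is applied to an n × d data matrix A, and a rank-k approximation of A is recovered from the row space of SA. This is an illustration, not the paper's learning procedure; the random sparsity pattern, random non-zero values, and the specific sizes below are assumptions chosen for demonstration (in the learned setting, the non-zero values, and per contribution 2 also their positions, are optimized on training data).

import numpy as np

def sparse_sketch(m, n, s, rng):
    """Build an m x n sketching matrix with s non-zeros per column.

    Positions and values are random here; in the learning-based setting
    they are trained on data rather than drawn at random.
    """
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)  # sparsity pattern of column j
        S[rows, j] = rng.standard_normal(s)          # non-zero values of column j
    return S

def sketched_lra(A, S, k):
    """Rank-k approximation of A computed from the sketch S @ A.

    Standard sketch-and-solve step: project A onto the row space of S @ A,
    then truncate the projection to rank k.
    """
    SA = S @ A
    Q, _ = np.linalg.qr(SA.T)                  # columns of Q span the row space of SA
    AQ = A @ Q                                 # A restricted to that subspace
    U, sig, Vt = np.linalg.svd(AQ, full_matrices=False)
    return (U[:, :k] * sig[:k]) @ Vt[:k] @ Q.T

# Toy usage with illustrative (hypothetical) sizes.
rng = np.random.default_rng(0)
n, d, m, s, k = 200, 50, 20, 2, 5
A = rng.standard_normal((n, d))
S = sparse_sketch(m, n, s, rng)
A_k = sketched_lra(A, S, k)
print("rank-k approximation error:", np.linalg.norm(A - A_k, "fro"))

The QR factorization above is one standard way to work in the row space of SA; the pseudo-inverse computation that plays this role in the paper's analysis is what the Goldberg–Jerrum argument in contribution 1 handles.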
