Mixed-Precision Random Projection for RandNLA on Tensor Cores

04/10/2023
by Hiroyuki Ootomo, et al.

Random projection can reduce the dimension of data while capturing its structure, and it is a fundamental tool in machine learning, signal processing, and information retrieval, all of which must handle large amounts of data today. RandNLA (Randomized Numerical Linear Algebra) leverages random projection to reduce the computational cost of low-rank matrix and tensor decompositions and of least-squares problems. Although the random projection itself is a simple matrix multiplication, its asymptotic computational complexity is typically larger than that of the other operations in a RandNLA algorithm, and various studies have therefore proposed methods for reducing this cost. We propose a fast mixed-precision random projection method for single-precision tensors on NVIDIA GPUs using Tensor Cores. Exploiting the fact that the random matrix requires less precision than the data, we develop a highly optimized matrix multiplication between FP32 and FP16 matrices, SHGEMM (Single- and Half-precision GEMM), on Tensor Cores, in which the random matrix is stored in FP16. Our method computes randomized SVD 1.28 times faster and random projection high-order SVD 1.75 times faster than baseline single-precision implementations while maintaining accuracy.


Related research

- 03/07/2022: Recovering single precision accuracy from Tensor Cores while surpassing the FP32 theoretical peak performance
- 06/21/2023: DGEMM on Integer Matrix Multiplication Unit
- 01/29/2021: Performance of the low-rank tensor-train SVD (TT-SVD) for large dense tensors on modern multi-core CPUs
- 06/06/2022: Dissecting Tensor Cores via Microbenchmarks: Latency, Throughput and Numerical Behaviors
- 10/05/2021: Efficient GPU implementation of randomized SVD and its applications
- 04/14/2022: cu_FastTucker: A Faster and Stabler Stochastic Optimization for Parallel Sparse Tucker Decomposition on Multi-GPUs
- 06/22/2020: Automatic Kernel Generation for Volta Tensor Cores
