Accelerating In-Browser Deep Learning Inference on Diverse Edge Clients through Just-in-Time Kernel Optimizations

by Fucheng Jia, et al.

Web applications are increasingly becoming the primary platform for AI service delivery, making in-browser deep learning (DL) inference ever more prominent. However, current in-browser inference systems fail to exploit advanced web programming techniques or to customize kernels for the diverse client devices they run on, leading to suboptimal performance. To address these issues, this paper presents nn-JIT.web, the first in-browser inference system that just-in-time (JIT) auto-generates optimized kernels for both CPUs and GPUs during inference. The system achieves this through two novel web programming techniques that significantly reduce kernel generation time compared with other tensor compilers such as TVM, while matching or even improving kernel performance. The first, Tensor-Web Compiling Co-Design, lowers compiling cost by unifying tensor and web compiling and eliminating redundant or ineffective compiling passes. The second, Web-Specific Lite Kernel Optimization Space Design, reduces kernel tuning cost by focusing on web programming requirements and efficient hardware utilization, shrinking the optimization space to just dozens of candidates. nn-JIT.web is evaluated on modern transformer models across a range of client devices, including mainstream CPUs and GPUs from ARM, Intel, AMD, and Nvidia. Results show that nn-JIT.web achieves up to 8.2x speedup within 30 seconds compared with the baselines across various models.
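The abstract's core idea of tuning over a deliberately small kernel optimization space can be illustrated with a minimal sketch. Plain JavaScript matmul variants stand in here for JIT-generated Wasm/WebGPU kernels; the candidate set, function names, and timing loop are illustrative assumptions, not nn-JIT.web's actual implementation:

```javascript
// Illustrative auto-tuning loop: a handful of matmul "kernels" stand in for
// JIT-generated web kernels; each candidate is timed and the fastest is kept.
function matmulNaive(A, B, C, n) {
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) {
      let acc = 0;
      for (let k = 0; k < n; k++) acc += A[i * n + k] * B[k * n + j];
      C[i * n + j] = acc;
    }
}

function makeTiled(tile) {
  // One point in a small optimization space: loop tiling with a given tile size.
  return function (A, B, C, n) {
    C.fill(0);
    for (let i0 = 0; i0 < n; i0 += tile)
      for (let k0 = 0; k0 < n; k0 += tile)
        for (let j0 = 0; j0 < n; j0 += tile)
          for (let i = i0; i < Math.min(i0 + tile, n); i++)
            for (let k = k0; k < Math.min(k0 + tile, n); k++) {
              const a = A[i * n + k];
              for (let j = j0; j < Math.min(j0 + tile, n); j++)
                C[i * n + j] += a * B[k * n + j];
            }
  };
}

function tune(candidates, n, reps = 3) {
  const A = Float32Array.from({ length: n * n }, () => Math.random());
  const B = Float32Array.from({ length: n * n }, () => Math.random());
  const C = new Float32Array(n * n);
  let best = null;
  for (const [name, kernel] of candidates) {
    let t = Infinity;
    for (let r = 0; r < reps; r++) {
      const t0 = performance.now();
      kernel(A, B, C, n);
      t = Math.min(t, performance.now() - t0); // keep the best run per kernel
    }
    if (best === null || t < best.t) best = { name, t };
  }
  return best;
}

// A "lite" space of only a few candidates keeps tuning time negligible.
const space = [
  ["naive", matmulNaive],
  ["tiled-8", makeTiled(8)],
  ["tiled-16", makeTiled(16)],
  ["tiled-32", makeTiled(32)],
];
const winner = tune(space, 128);
console.log(`selected kernel: ${winner.name} (${winner.t.toFixed(2)} ms)`);
```

The design point this sketch mirrors is that with only dozens of candidates, exhaustive on-device timing is cheap enough to run at page load, which is what makes JIT tuning in the browser practical.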


