Faster CNNs with Direct Sparse Convolutions and Guided Pruning

08/04/2016
by Jongsoo Park, et al.

Phenomenally successful in practical inference problems, convolutional neural networks (CNNs) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, is often undesirably large. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits: while pruning the fully connected layers reduces a CNN's size considerably, it does not improve inference speed noticeably, as the compute-heavy parts lie in convolutions. Pruning CNNs in a way that increases inference speed often imposes specific sparsity structures, limiting the achievable sparsity levels. We present a method that simultaneously achieves size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation applicable to the convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and different computer architectures. Together, these allow us to demonstrate 3.1–7.3× convolution speedups over dense convolution in AlexNet on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers. We open-source our project at https://github.com/IntelLabs/SkimCaffe.
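
To make the core idea concrete, here is a minimal NumPy sketch of direct sparse convolution: the kernel is stored as its nonzero entries only, and each nonzero contributes one scaled, shifted window of the input to one output channel, so the work scales with the kernel's density rather than its full size. This is an illustration of the technique under simplified assumptions, not the SkimCaffe implementation (which uses hand-tuned CSR-based C++ kernels); the function name and the (oc, ic, kh, kw, value) encoding are ours.

```python
import numpy as np

def direct_sparse_conv2d(x, nz_weights, out_channels, K, stride=1):
    """Direct sparse convolution over a pre-padded input.

    x          : (C_in, H, W) input feature map, already zero-padded
    nz_weights : iterable of (oc, ic, kh, kw, value), nonzero weights only
    K          : spatial kernel size (K x K)
    """
    _, H, W = x.shape
    Ho = (H - K) // stride + 1
    Wo = (W - K) // stride + 1
    y = np.zeros((out_channels, Ho, Wo), dtype=x.dtype)
    for oc, ic, kh, kw, v in nz_weights:
        # each nonzero weight scales one shifted Ho x Wo window of channel ic
        y[oc] += v * x[ic, kh:kh + stride * Ho:stride,
                           kw:kw + stride * Wo:stride]
    return y

# usage: prune a random kernel to ~90% sparsity and convolve
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3, 3, 3))            # (C_out, C_in, K, K)
w[rng.random(w.shape) < 0.9] = 0.0               # zero out ~90% of weights
nz = [(oc, ic, kh, kw, w[oc, ic, kh, kw])
      for oc, ic, kh, kw in zip(*np.nonzero(w))]
y = direct_sparse_conv2d(rng.standard_normal((3, 10, 10)), nz,
                         out_channels=4, K=3)
```

Note what stays dense here: the feature map. Only the weights are sparse, which is why no im2col-style lowering of the input is needed and why the approach works for arbitrary sparsity patterns.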
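The "sweet spot" notion can likewise be illustrated with a toy cost model (our simplification, not the model from the paper): suppose each surviving nonzero costs q times as much as a dense multiply-accumulate because of irregular memory access and index decoding, with q depending on the architecture. The predicted speedup is then 1/(q · density), so sparse convolution wins only once density drops below 1/q, which is why moderately pruned layers can be slower than dense and why the useful sparsity range differs across Atom, Xeon, and Xeon Phi.

```python
def predicted_speedup(density, q):
    """Toy model: sparse conv does density * (dense FLOPs) of useful work,
    but each nonzero MAC costs q x a dense MAC. q is illustrative and
    architecture-dependent, not a measured figure from the paper."""
    return 1.0 / (q * density)

for q in (2.0, 4.0):                     # hypothetical per-nonzero overheads
    for density in (0.5, 0.2, 0.05):
        print(f"q={q:.0f}, density={density:.0%}: "
              f"{predicted_speedup(density, q):.1f}x vs dense")
```

Under this toy model, at q = 4 a layer pruned to 50% density is predicted to run at half the dense speed, while 5% density yields a 5× speedup; a guided pruning scheme can use such predictions to target per-layer sparsity levels worth pursuing.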


