Reduced-Precision Strategies for Bounded Memory in Deep Neural Nets

11/17/2015
by Patrick Judd, et al.

This work investigates how using reduced-precision data in Convolutional Neural Networks (CNNs) affects network accuracy during classification. More specifically, this study considers networks where each layer may use different precision data. Our key result is the observation that the tolerance of CNNs to reduced-precision data varies not only across networks, a well-established observation, but also within networks. Tuning precision per layer is appealing as it could enable energy and performance improvements. In this paper we study how error tolerance varies across layers and propose a method for finding a low-precision configuration for a network while maintaining high accuracy. A diverse set of CNNs is analyzed, showing that compared to a conventional implementation using a 32-bit floating-point representation for all layers, and with less than 1% loss in relative accuracy, the data footprint required by these networks can be reduced by an average of 74%.
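To make the idea concrete, below is a minimal sketch of one way a per-layer precision configuration could be found: greedily lowering each layer's bit-width until relative accuracy loss exceeds a threshold. This is an illustrative reconstruction, not the paper's actual procedure; the `quantize` fixed-point model, the `evaluate` harness, and all names here are assumptions supplied for the example.

```python
import numpy as np

def quantize(x, int_bits, frac_bits):
    """Round x onto a fixed-point grid with int_bits integer bits and
    frac_bits fractional bits (a common reduced-precision data model;
    the paper's exact numeric format may differ)."""
    scale = 2.0 ** frac_bits
    limit = 2.0 ** int_bits
    return np.clip(np.round(x * scale) / scale, -limit, limit - 1.0 / scale)

def per_layer_bit_search(evaluate, num_layers, max_bits=16, max_rel_loss=0.01):
    """Greedy one-layer-at-a-time search for small per-layer bit-widths.

    `evaluate(bits)` is a hypothetical harness that runs classification
    with layer i's data quantized to bits[i] bits and returns accuracy.
    """
    baseline = evaluate([max_bits] * num_layers)  # full-precision reference
    bits = [max_bits] * num_layers
    for i in range(num_layers):
        # Lower this layer's precision until relative accuracy loss
        # exceeds the budget, then keep the last acceptable width.
        while bits[i] > 1:
            trial = list(bits)
            trial[i] -= 1
            if (baseline - evaluate(trial)) / baseline > max_rel_loss:
                break
            bits[i] = trial[i]
    return bits

if __name__ == "__main__":
    # Toy stand-in: accuracy collapses once any layer drops below a
    # hidden per-layer requirement (purely illustrative).
    required = [9, 7, 5, 8]
    def evaluate(bits):
        return 0.85 if all(b >= r for b, r in zip(bits, required)) else 0.5
    print(per_layer_bit_search(evaluate, num_layers=4))  # -> [9, 7, 5, 8]
```

A greedy pass like this evaluates only O(layers × bits) configurations rather than the exponential space of all per-layer assignments, which is why per-layer (rather than joint) tuning is attractive in practice.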


Related research

05/29/2019 · Mixed Precision Training With 8-bit Floating Point
Reduced precision computation for deep neural networks is one of the key...

12/13/2022 · Numerical Stability of DeepGOPlus Inference
Convolutional neural networks (CNNs) are currently among the most widely...

06/16/2020 · Multi-Precision Policy Enforced Training (MuPPET): A precision-switching strategy for quantised fixed-point training of CNNs
Large-scale convolutional neural networks (CNNs) suffer from very long t...

06/07/2017 · ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks
In this paper we introduce ShiftCNN, a generalized low-precision archite...

01/31/2023 · Training with Mixed-Precision Floating-Point Assignments
When training deep neural networks, keeping all tensors in high precisio...

01/30/2023 · The Hidden Power of Pure 16-bit Floating-Point Neural Networks
Lowering the precision of neural networks from the prevalent 32-bit prec...

06/28/2021 · Reducing numerical precision preserves classification accuracy in Mondrian Forests
Mondrian Forests are a powerful data stream classification method, but t...
