Combining Learning and Optimization for Transprecision Computing

02/24/2020
by Andrea Borghesi et al.

The growing demands of the worldwide IT infrastructure stress the need for reduced power consumption, which so-called transprecision computing addresses by improving energy efficiency at the expense of precision. For example, reducing the number of bits for some floating-point operations leads to higher efficiency, but also to a non-linear decrease in computation accuracy. Depending on the application, small errors can be tolerated, allowing the precision of the computation to be fine-tuned. Finding the optimal precision for all variables with respect to an error bound is a complex task, which the literature tackles via heuristics. In this paper, we report on a first attempt to address the problem by combining a Mathematical Programming (MP) model and a Machine Learning (ML) model, following the Empirical Model Learning methodology. The ML model learns the relation between the precision of the variables and the output error; this information is then embedded in the MP model, which minimizes the number of bits. An additional refinement phase then improves the quality of the solution. The experimental results demonstrate an average speedup of 6.5% and a 3% increase in solution quality compared to the state of the art. In addition, experiments on a hardware platform capable of mixed-precision arithmetic (PULPissimo) show the benefits of the proposed approach, with energy savings of around 40% compared to fixed precision.
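The core idea, learning how per-variable precision affects the output error and embedding that relation as a constraint in an optimization model that minimizes the total number of bits, can be sketched as follows. This is a minimal illustration rather than the paper's implementation: it uses synthetic error data, a linear surrogate in place of the paper's ML model, and an off-the-shelf MILP solver; the number of variables, the error bound, and the bit-width range are assumptions made for the example.

```python
# Minimal sketch of the learn-then-optimize idea (not the paper's code).
# 1) fit a surrogate mapping per-variable bit-widths -> output error,
# 2) embed the learned relation as a constraint in a MILP that minimizes
#    the total number of bits subject to a tolerated error bound.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n_vars = 4          # hypothetical number of FP variables in the kernel
eps = 1e-2          # hypothetical tolerated output error

# Synthetic (precision assignment, observed error) samples; in practice these
# would come from running the benchmark at different mixed-precision settings.
X = rng.integers(8, 53, size=(200, n_vars)).astype(float)
y = np.exp(-0.15 * X).sum(axis=1)      # toy error model: error shrinks with bits

# Learn the precision -> error relation. The paper uses a more expressive ML
# model; a linear surrogate keeps the embedding trivially MILP-compatible.
reg = LinearRegression().fit(X, y)
w, b = reg.coef_, reg.intercept_

# MILP: minimize the sum of bit-widths s.t. predicted error <= eps,
# with integer bit-widths between 8 and 53.
c = np.ones(n_vars)                                     # objective: total bits
err_con = LinearConstraint(w.reshape(1, -1), -np.inf, eps - b)
res = milp(c=c,
           constraints=[err_con],
           integrality=np.ones(n_vars),
           bounds=Bounds(lb=8, ub=53))

print("optimal bit-widths:", res.x)
```

In the paper's setting, the learned model is more accurate than a linear surrogate and the solution found by the optimizer is further improved by a refinement phase; the sketch only shows how a learned precision-to-error relation can be turned into a constraint of the bit-minimization problem.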
