A Stochastic-Computing based Deep Learning Framework using Adiabatic Quantum-Flux-Parametron Superconducting Technology

07/22/2019
by Ruizhe Cai, et al.

The recently developed Adiabatic Quantum-Flux-Parametron (AQFP) superconducting technology achieves the highest energy efficiency among superconducting logic families, offering a potentially large gain over state-of-the-art CMOS. In 2016, the successful fabrication and testing of AQFP-based circuits at the scale of 83,000 Josephson junctions (JJs) demonstrated the scalability and potential of implementing large-scale systems in AQFP. AQFP is therefore promising for high-performance computing and deep-space applications, with Deep Neural Network (DNN) inference acceleration as an important example. Beyond ultra-high energy efficiency, AQFP exhibits two unique characteristics. First, it is deeply pipelined by nature, since every AQFP logic gate is driven by an AC clock signal, which makes read-after-write (RAW) hazards harder to avoid. Second, it offers a unique opportunity for true random number generation (RNG) using a single AQFP buffer, far more efficiently than RNG in CMOS. We point out that these two characteristics make AQFP especially compatible with the stochastic computing (SC) technique, which represents a value as a time-independent bit sequence and thus fits naturally with the deep pipelining. Moreover, prior work has investigated SC for DNNs and demonstrated its suitability, since DNNs tolerate the approximate computation inherent in SC. This work is the first to develop an SC-based DNN acceleration framework using AQFP technology.
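To make the SC idea concrete, below is a minimal Python sketch (not from the paper; the software RNG and stream length are stand-ins for the AQFP buffer-based RNG) of unipolar stochastic computing: a value in [0, 1] is encoded as the fraction of 1s in a random bitstream, and multiplication of two independent streams reduces to a bitwise AND.

import random

def to_bitstream(value, length):
    # Unipolar SC encoding: each bit is 1 with probability `value`,
    # so the fraction of 1s approximates the value. In AQFP, a single
    # buffer could generate each random bit (assumption for illustration).
    return [1 if random.random() < value else 0 for _ in range(length)]

def sc_multiply(a_bits, b_bits):
    # For independent unipolar streams, P(a AND b) = P(a) * P(b),
    # so multiplication is a bitwise AND of the two streams.
    return [a & b for a, b in zip(a_bits, b_bits)]

def from_bitstream(bits):
    # Decoding: the represented value is the mean of the bits.
    return sum(bits) / len(bits)

n = 4096
a = to_bitstream(0.5, n)
b = to_bitstream(0.8, n)
print(from_bitstream(sc_multiply(a, b)))  # close to 0.5 * 0.8 = 0.4

Longer bitstreams trade latency for accuracy, which is why cheap, fast random bit generation (such as a single AQFP buffer) matters for SC-based DNN inference.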

Related research

09/11/2023
P2LSG: Powers-of-2 Low-Discrepancy Sequence Generator for Stochastic Computing
Stochastic Computing (SC) is an unconventional computing paradigm proces...

02/18/2018
Towards Ultra-High Performance and Energy Efficiency of Deep Learning Systems: An Algorithm-Hardware Co-Optimization Framework
Hardware accelerations of deep learning systems have been extensively in...

04/20/2022
Multiply-and-Fire (MNF): An Event-driven Sparse Neural Network Accelerator
Machine learning, particularly deep neural network inference, has become...

06/22/2020
Fully-parallel Convolutional Neural Network Hardware
A new trans-disciplinary knowledge area, Edge Artificial Intelligence or...

03/25/2020
ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning
Deep neural networks (DNNs) have surpassed human-level accuracy in a var...

09/22/2018
In-memory multiplication engine with SOT-MRAM based stochastic computing
Processing-in-memory (PIM) turns out to be a promising solution to break...

05/10/2018
Towards Budget-Driven Hardware Optimization for Deep Convolutional Neural Networks using Stochastic Computing
Recently, Deep Convolutional Neural Network (DCNN) has achieved tremendo...
