Parallelizing Legendre Memory Unit Training

02/22/2021
by Narsimha Chilkuri et al.

Recently, a new recurrent neural network (RNN), the Legendre Memory Unit (LMU), was proposed and shown to achieve state-of-the-art performance on several benchmark datasets. Here we leverage the linear time-invariant (LTI) memory component of the LMU to construct a simplified variant that can be parallelized during training (yet still executed as an RNN during inference), overcoming a well-known limitation of training RNNs on GPUs. First, we show that this reformulation, which applies generally to any deep network whose recurrent components are linear, makes training up to 200 times faster. Second, to validate its utility, we compare its performance against the original LMU and a variety of published LSTM and transformer networks on seven benchmarks, ranging from psMNIST to sentiment analysis to machine translation. We demonstrate that our models exhibit superior performance on all datasets, often using fewer parameters. For instance, our LMU sets a new state-of-the-art result on psMNIST and, on IMDB sentiment analysis, outperforms DistilBERT and LSTM models while using half the parameters.
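The parallelization rests on a simple observation: because the LMU's memory is an LTI system with fixed state matrices, the entire state trajectory is a causal convolution of the input with the system's impulse response, which can be evaluated across all time steps at once (e.g. via FFT) rather than step by step. Below is a minimal NumPy sketch of that equivalence; the function names are ours, and the placeholder matrices A and B stand in for the paper's discretized Legendre matrices, which are omitted here.

```python
import numpy as np

def lmu_memory_sequential(A, B, x):
    """Run the LTI memory as an RNN: m_t = A m_{t-1} + B x_t.
    A: (d, d) fixed state matrix; B: (d,) fixed input vector;
    x: (T,) input sequence. Returns the (T, d) state trajectory."""
    m = np.zeros(A.shape[0])
    states = np.empty((len(x), A.shape[0]))
    for t, x_t in enumerate(x):
        m = A @ m + B * x_t        # one recurrent step per time step
        states[t] = m
    return states

def lmu_memory_parallel(A, B, x):
    """Same trajectory, no recurrence: m_t = sum_{j<=t} A^(t-j) B x_j,
    i.e. a causal convolution of x with the impulse response H[k] = A^k B,
    evaluated here with FFTs so every time step is computed at once."""
    T, d = len(x), A.shape[0]
    H = np.empty((T, d))
    H[0] = B
    for k in range(1, T):          # impulse response of the LTI system
        H[k] = A @ H[k - 1]
    n = 2 * T                      # zero-pad to avoid circular wrap-around
    Hf = np.fft.rfft(H, n=n, axis=0)
    xf = np.fft.rfft(x, n=n)[:, None]
    return np.fft.irfft(Hf * xf, n=n, axis=0)[:T]

# Both paths yield the same memory states (placeholder A, B for illustration):
rng = np.random.default_rng(0)
d, T = 4, 256
A = 0.3 * rng.normal(size=(d, d))  # stand-in for the discretized Legendre matrix
B = rng.normal(size=d)
x = rng.normal(size=T)
assert np.allclose(lmu_memory_sequential(A, B, x),
                   lmu_memory_parallel(A, B, x))
```

During training, the convolutional form lets a GPU process all time steps of a sequence simultaneously; at inference, the same A and B can be plugged back into the step-by-step recurrence, which is the RNN execution mode the abstract refers to.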


Related research

02/07/2018 · Effective Quantization Approaches for Recurrent Neural Networks
Deep learning, and in particular Recurrent Neural Networks (RNN) have sh...

08/29/2017 · Gradual Learning of Deep Recurrent Neural Networks
Deep Recurrent Neural Networks (RNNs) achieve state-of-the-art results i...

12/19/2018 · Self-Attention: A Better Building Block for Sentiment Analysis Neural Network Classifiers
Sentiment Analysis has seen much progress in the past two decades. For t...

11/20/2017 · E-PUR: An Energy-Efficient Processing Unit for Recurrent Neural Networks
Recurrent Neural Networks (RNNs) are a key technology for emerging appli...

09/19/2020 · Towards Computational Linguistics in Minangkabau Language: Studies on Sentiment Analysis and Machine Translation
Although some linguists (Rusmali et al., 1985; Crouch, 2009) have fairly...

09/14/2021 · Oscillatory Fourier Neural Network: A Compact and Efficient Architecture for Sequential Processing
Tremendous progress has been made in sequential processing with the rece...

06/08/2015 · Learning to Transduce with Unbounded Memory
Recently, strong results have been demonstrated by Deep Recurrent Neural...
