Capacity and Trainability in Recurrent Neural Networks

11/29/2016
by Jasmine Collins, et al.

Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.
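
As a rough illustration of what these two figures imply, the sketch below counts the parameters of a single-layer vanilla RNN and converts them into approximate capacity budgets using the ~5 bits per parameter and ~1 real number per hidden unit estimates quoted above. The cell equation and the layer sizes are standard textbook choices picked only for illustration, not the paper's experimental setup.

```python
def vanilla_rnn_param_count(n_in, n_hidden, n_out):
    """Parameter count for a single-layer vanilla RNN:
    h_t = tanh(W_hh h_{t-1} + W_xh x_t + b_h),  y_t = W_hy h_t + b_y."""
    recurrent = n_hidden * n_hidden           # W_hh
    input_proj = n_in * n_hidden              # W_xh
    hidden_bias = n_hidden                    # b_h
    readout = n_hidden * n_out + n_out        # W_hy, b_y
    return recurrent + input_proj + hidden_bias + readout


# Hypothetical sizes, chosen only for illustration.
n_in, n_hidden, n_out = 64, 256, 10
params = vanilla_rnn_param_count(n_in, n_hidden, n_out)

# Per-parameter task capacity (~5 bits/parameter) and per-unit input-history
# capacity (~1 real number per hidden unit), as reported in the abstract.
task_bits = 5 * params
history_numbers = n_hidden

print(f"parameters: {params}")
print(f"approx. task capacity: {task_bits} bits ({task_bits / 8 / 1024:.1f} KiB)")
print(f"approx. input-history capacity: {history_numbers} real numbers")
```

Note that doubling n_hidden roughly quadruples the parameter count (and hence the task-information budget) but only doubles the per-unit input-history budget, which is one way to see how the two capacity bounds can each become the binding constraint for different tasks.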

Related research

02/28/2018  Tensor Decomposition for Compressing Recurrent Neural Network
In the machine learning fields, Recurrent Neural Network (RNN) has becom...

03/31/2016  Minimal Gated Unit for Recurrent Neural Networks
Recently, recurrent neural networks (RNN) have been very successful in han...

08/30/2019  A single-layer RNN can approximate stacked and bidirectional RNNs, and topologies in between
To enhance the expressiveness and representational capacity of recurrent...

04/18/2020  A Formal Hierarchy of RNN Architectures
We develop a formal hierarchy of the expressive capacity of RNN architec...

04/24/2023  Adaptive-saturated RNN: Remember more with less instability
Orthogonal parameterization is a compelling solution to the vanishing gr...

05/03/2020  Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example
The ability to store and manipulate information is a hallmark of computa...

02/12/2019  Capacity allocation analysis of neural networks: A tool for principled architecture design
Designing neural network architectures is a task that lies somewhere bet...
