Gradual Learning of Deep Recurrent Neural Networks

08/29/2017
by   Ziv Aharoni, et al.

Deep Recurrent Neural Networks (RNNs) achieve state-of-the-art results in many sequence-to-sequence tasks. However, deep RNNs are difficult to train and suffer from overfitting. We introduce a training method that trains the network gradually and treats each layer individually, to achieve improved results in language modelling tasks. Training a deep LSTM with Gradual Learning (GL) achieves a perplexity of 61.7 on the Penn Treebank (PTB) corpus. To the best of our knowledge (as of May 20, 2017), GL improves upon the best reported performance of a single LSTM/RHN model on the word-level PTB dataset.
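The abstract describes the method only at a high level: the stacked LSTM is trained gradually, one layer at a time, with each layer treated individually. The sketch below is an illustrative assumption of what such layer-wise gradual training could look like in PyTorch; the `StackedLSTMLM` module, `gradual_train` loop, and all hyperparameters are hypothetical and are not taken from the paper.

```python
# A minimal sketch of layer-wise "gradual" training for a stacked LSTM
# language model, assuming PyTorch. This is an illustrative assumption,
# not the authors' reference implementation.
import torch
import torch.nn as nn

class StackedLSTMLM(nn.Module):
    def __init__(self, vocab_size, hidden_size, num_layers):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # One LSTM module per layer, so layers can be added and trained one at a time.
        self.layers = nn.ModuleList(
            [nn.LSTM(hidden_size, hidden_size, batch_first=True)
             for _ in range(num_layers)]
        )
        self.decoder = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens, depth):
        # Only the first `depth` layers participate in the forward pass.
        h = self.embed(tokens)
        for layer in self.layers[:depth]:
            h, _ = layer(h)
        return self.decoder(h)

def gradual_train(model, batches, epochs_per_stage=5, lr=1e-3):
    criterion = nn.CrossEntropyLoss()
    # Each stage activates one more layer of the stack before training resumes.
    for depth in range(1, len(model.layers) + 1):
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs_per_stage):
            for inputs, targets in batches:
                optimizer.zero_grad()
                logits = model(inputs, depth)
                loss = criterion(logits.reshape(-1, logits.size(-1)),
                                 targets.reshape(-1))
                loss.backward()
                optimizer.step()
```

In this sketch every active parameter is updated at each stage; treating layers "individually" could instead mean per-layer learning rates or freezing earlier layers, which the abstract does not specify.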
