Counting in Language with RNNs

10/29/2018
by Heng Xin Fun, et al.

In this paper we examine a possible reason why the LSTM outperforms the GRU on language modeling and, more specifically, on machine translation. We hypothesize that this advantage has to do with counting, a consistent theme across the RNN literature on long-term dependencies, counting, and language modeling. Using simplified forms of language -- context-free and context-sensitive languages -- we show exactly how the LSTM performs its counting via its cell state during inference, and why the GRU cannot perform as well.
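The counting mechanism at issue can be made concrete with a toy example. The sketch below is not the paper's code: it is a scalar LSTM with hypothetical hand-picked weights that tracks membership in the context-free language a^n b^n by accumulating +1/-1 in its unbounded cell state, next to a scalar GRU whose convex-combination update keeps its state bounded in (-1, 1). The names lstm_count and gru_count and the constant BIG are illustrative assumptions, chosen only to saturate the gates.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

BIG = 20.0  # large pre-activation, enough to saturate sigmoid/tanh

def lstm_count(string):
    """Scalar LSTM with hand-picked weights that counts #a - #b.
    With the forget and input gates saturated near 1, the additive
    update c_t = f*c_{t-1} + i*g_t accumulates +-1 per symbol, and
    the cell state is unbounded, so it can hold any count."""
    c = 0.0
    for ch in string:
        f = sigmoid(BIG)                            # forget gate ~= 1
        i = sigmoid(BIG)                            # input gate  ~= 1
        g = math.tanh(BIG if ch == "a" else -BIG)   # candidate ~= +1 for a, -1 for b
        c = f * c + i * g
    return c

def gru_count(string):
    """The same exercise for a scalar GRU. Its update
    h_t = (1 - z)*h_{t-1} + z*g_t is a convex combination of values
    in (-1, 1), so |h_t| < 1 for any input length: the state can
    saturate but never accumulate an unbounded count."""
    h = 0.0
    for ch in string:
        z = sigmoid(0.0)                            # update gate fixed at 0.5
        g = math.tanh(BIG if ch == "a" else -BIG)   # candidate ~= +1 / -1
        h = (1.0 - z) * h + z * g
    return h

# The cell state ends near 0 iff #a == #b; full a^n b^n recognition
# would additionally check the count never dips below 0 mid-string.
for s in ["aabb", "aaabbb", "aaabb", "abbb"]:
    c = lstm_count(s)
    print(f"{s!r}: LSTM cell state ~= {c:+.2f} "
          f"({'balanced' if abs(c) < 0.5 else 'unbalanced'})")

# The contrast: on a run of 50 a's the LSTM cell state grows to ~50,
# while the GRU hidden state is capped near 1 and loses the count.
print(f"LSTM after 'a'*50: {lstm_count('a' * 50):.1f}")
print(f"GRU  after 'a'*50: {gru_count('a' * 50):.3f}")
```

The contrast the sketch isolates is the additive cell update: because c_t is never squashed, the LSTM can store a count of any magnitude, while the GRU's interpolation between its old state and a bounded candidate is confined to a fixed interval and can only saturate.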
