Neural Random Projections for Language Modelling

07/02/2018
by Davide Nunes, et al.

Neural network-based language models deal with data sparsity by mapping the large discrete space of words into a smaller continuous space of real-valued vectors. By learning distributed vector representations for words, each training sample informs the neural network model about a combinatorial number of other patterns. We exploit the sparsity of natural language even further by encoding each unique input word with a reduced sparse random representation. In this paper, we propose an encoder for discrete inputs that uses random projections to enable the learning of language models with significantly smaller parameter spaces than comparable neural network architectures. Furthermore, random projections eliminate the dependency between a neural network architecture and the size of a pre-established dictionary. We investigate the properties of our encoding mechanism empirically by evaluating its performance on the widely used Penn Treebank corpus with several configurations of baseline feedforward neural network models. We show that guaranteeing approximately equidistant inner products between the representations of unique discrete inputs provides the neural network model with sufficient information to learn useful distributed representations for those inputs. Because they do not require prior enumeration of the lexicon, random projections allow us to address the dynamic and open-ended character of natural languages.
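To make the encoding idea concrete, the sketch below shows one common way to assign sparse random codes to words incrementally, so that distinct words receive approximately orthogonal vectors and no dictionary has to be fixed in advance. The dimensionality, number of active units, and the `RandomIndexEncoder` name are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np


def sparse_random_vector(dim, num_active, rng):
    """Build a sparse ternary vector with `num_active` non-zero entries set to +1 or -1."""
    vec = np.zeros(dim)
    idx = rng.choice(dim, size=num_active, replace=False)
    vec[idx] = rng.choice([-1.0, 1.0], size=num_active)
    return vec


class RandomIndexEncoder:
    """Assigns each previously unseen word a fixed sparse random code on first sight,
    removing the need to enumerate the vocabulary before training."""

    def __init__(self, dim=1000, num_active=10, seed=42):
        self.dim = dim
        self.num_active = num_active
        self.rng = np.random.default_rng(seed)
        self.codes = {}

    def encode(self, word):
        # Reuse the stored code for known words; sample a new one otherwise.
        if word not in self.codes:
            self.codes[word] = sparse_random_vector(self.dim, self.num_active, self.rng)
        return self.codes[word]


encoder = RandomIndexEncoder(dim=1000, num_active=10)
v1 = encoder.encode("language")
v2 = encoder.encode("model")

# Codes for distinct words have inner products near zero (approximately equidistant),
# while repeated words map to identical codes with inner product equal to num_active.
print(np.dot(v1, v2), np.dot(v1, encoder.encode("language")))
```

In a setup like this, the sparse codes would replace one-hot indices at the input layer, so the first weight matrix depends only on the projection dimension rather than on the vocabulary size.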
