A Cognitive Regularizer for Language Modeling

05/15/2021
by Jason Wei, et al.

The uniform information density (UID) hypothesis, which posits that speakers prefer utterances that distribute information uniformly across the signal, has gained substantial traction in psycholinguistics as an explanation for certain syntactic, morphological, and prosodic choices. Could uniform information density be operationalized as an inductive bias for statistical language modeling? In this paper, we augment the canonical MLE objective for training language models by encoding UID as a regularizer. In experiments on ten languages spanning five language families, we find that UID regularization consistently improves perplexity, with a larger effect when training data is limited. Moreover, via analysis of generated sequences, we find that UID-regularized language models have higher entropy and produce text that is longer and more lexically diverse. Our results not only suggest that UID is a reasonable inductive bias for language modeling, but also provide an alternative validation of the UID hypothesis using modern-day NLP tools.
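To make the idea of "encoding UID as a regularizer" concrete, the sketch below shows one plausible way to add such a term on top of the standard MLE objective: penalizing the variance of per-token surprisals, so that information is spread more evenly across the sequence. This is a minimal illustration, not the paper's exact objective; the function name `uid_regularized_loss` and the weight `beta` are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def uid_regularized_loss(logits, targets, beta=0.01, pad_id=-100):
    """Cross-entropy (MLE) loss plus a UID-style penalty.

    Assumed form of the UID term: the variance of per-token surprisals,
    which encourages the model to distribute information uniformly
    across the sequence. `beta` controls the regularization strength.
    """
    vocab_size = logits.size(-1)

    # Per-token surprisal: -log p(target | context), no reduction yet.
    surprisal = F.cross_entropy(
        logits.view(-1, vocab_size),
        targets.view(-1),
        ignore_index=pad_id,
        reduction="none",
    )
    mask = (targets.view(-1) != pad_id).float()
    n_tokens = mask.sum().clamp(min=1.0)

    # Standard MLE objective: mean surprisal over non-padding tokens.
    mle_loss = (surprisal * mask).sum() / n_tokens

    # UID penalty: variance of surprisals over non-padding tokens.
    uid_penalty = (((surprisal - mle_loss) ** 2) * mask).sum() / n_tokens

    return mle_loss + beta * uid_penalty
```

In this sketch, setting `beta=0` recovers plain maximum-likelihood training, so the UID term can be tuned as an ordinary hyperparameter.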
