Better Long-Range Dependency By Bootstrapping A Mutual Information Regularizer

05/28/2019
by Yanshuai Cao, et al.

In this work, we develop a novel regularizer to improve the learning of long-range dependencies in sequence data. Applied to language modelling, our regularizer expresses the inductive bias that sequence variables should have high mutual information, even though the model might not see abundant observations of complex long-range dependencies. We show how the "next sentence prediction (classification)" heuristic can be derived in a principled way from our mutual information estimation framework, and be further extended to maximize the mutual information of sequence variables. The proposed approach is not only effective at increasing the mutual information of segments under the learned model but, more importantly, leads to higher likelihood on held-out data and improved generation quality.
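The link between next-sentence prediction and mutual information estimation can be made concrete with a Jensen-Shannon-style lower bound (in the spirit of MINE/Deep InfoMax): a binary critic that separates consecutive segment pairs (samples from the joint) from randomly re-paired segments (samples from the product of marginals) yields a lower bound on I(X; Y), which can be maximized as a regularizer alongside the language-modelling loss. Below is a minimal PyTorch sketch; the bilinear critic, the `encode`-style inputs, and the way the bound is weighted into the loss are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIRegularizer(nn.Module):
    """Jensen-Shannon lower bound on the mutual information between
    encodings of adjacent text segments, estimated by a bilinear critic.

    Hypothetical sketch: the critic form and hyperparameters are
    assumptions for illustration.
    """
    def __init__(self, dim):
        super().__init__()
        self.critic = nn.Bilinear(dim, dim, 1)  # scores a segment pair

    def forward(self, z_a, z_b):
        # z_a, z_b: (batch, dim) encodings of consecutive segments.
        # Positive pairs: true consecutive segments (joint distribution).
        pos = self.critic(z_a, z_b).squeeze(-1)
        # Negative pairs: shuffle z_b within the batch to approximate
        # the product of marginals p(x)p(y).
        perm = torch.randperm(z_b.size(0))
        neg = self.critic(z_a, z_b[perm]).squeeze(-1)
        # JSD-style bound; maximizing it trains a `next sentence'
        # binary classifier that tells joint from marginal pairs.
        mi_lb = (-F.softplus(-pos)).mean() - F.softplus(neg).mean()
        return mi_lb

# Usage sketch: subtract the (weighted) bound from the LM objective,
# so minimizing the total loss maximizes the MI estimate.
#   loss = lm_loss - lam * mi_reg(z_a, z_b)
```

Framing the heuristic this way makes the design choice explicit: the classifier is not an auxiliary task in itself but a density-ratio estimator, so its objective directly bounds the quantity the regularizer is meant to increase.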
