Improving Natural Language Inference with a Pretrained Parser

09/18/2019
by Deric Pang, et al.

We introduce a novel approach to incorporating syntax into natural language inference (NLI) models. Our method uses contextual token-level vector representations from a pretrained dependency parser. Like other contextual embedders, our method is broadly applicable to any neural model. We experiment with four strong NLI models (the decomposable attention model, ESIM, BERT, and MT-DNN) and show consistent accuracy gains across three NLI benchmarks.
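A minimal sketch of the general idea described in the abstract: token-level contextual vectors produced by a frozen, pretrained parser encoder are concatenated with ordinary word embeddings before being fed to any NLI model. The class and parameter names below are illustrative, and the `nn.LSTM` is only a stand-in for whatever contextual encoder the trained parser actually provides; this is not the authors' implementation.

```python
import torch
import torch.nn as nn


class SyntaxAugmentedEmbedder(nn.Module):
    """Concatenate word embeddings with frozen contextual vectors from a
    (stand-in) pretrained dependency-parser encoder."""

    def __init__(self, vocab_size=10000, word_dim=300, parser_dim=400):
        super().__init__()
        self.word_embedding = nn.Embedding(vocab_size, word_dim)
        # Placeholder for the parser's contextual encoder; in the paper's
        # setting this would be loaded from a trained parser and kept frozen.
        self.parser_encoder = nn.LSTM(
            word_dim, parser_dim // 2, batch_first=True, bidirectional=True
        )
        for p in self.parser_encoder.parameters():
            p.requires_grad = False
        self.output_dim = word_dim + parser_dim

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        words = self.word_embedding(token_ids)        # (batch, seq, word_dim)
        with torch.no_grad():
            syntax, _ = self.parser_encoder(words)    # (batch, seq, parser_dim)
        # A downstream NLI model (DAM, ESIM, ...) consumes the concatenation.
        return torch.cat([words, syntax], dim=-1)


# Usage: feed the augmented token representations into any token-level NLI encoder.
embedder = SyntaxAugmentedEmbedder()
premise_ids = torch.randint(0, 10000, (2, 12))  # dummy batch of token ids
print(embedder(premise_ids).shape)              # torch.Size([2, 12, 700])
```

Because the syntactic vectors enter only at the embedding layer, the same wrapper can sit in front of embedding-based models or be adapted to Transformer inputs, which is what makes the approach broadly applicable.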
