Logic-Guided Data Augmentation and Regularization for Consistent Question Answering

04/21/2020
by Akari Asai, et al.

Many natural language questions require qualitative, quantitative or logical comparisons between two entities or events. This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions by integrating logic rules and neural models. Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model. Improving the global consistency of predictions, our approach achieves large improvements over previous methods in a variety of question answering (QA) tasks, including multiple-choice qualitative reasoning, cause-effect reasoning, and extractive machine reading comprehension. In particular, our method significantly improves the performance of RoBERTa-based models by 1-5%, advances the state of the art by around 5-8%, and reduces consistency violations by 58%. We further demonstrate that our approach can learn effectively from limited data.
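The two ideas in the abstract can be illustrated with a small sketch. The data format, rule names, and the penalty function below are illustrative assumptions, not the paper's implementation: symmetry (swapping the two compared entities flips the answer) and antonym substitution (replacing the comparative with its opposite flips the answer) generate logically entailed training examples, and a simple consistency penalty checks that the model's probabilities for a question and its flipped counterpart sum to one.

```python
# Hypothetical comparison-question format: two entities, a comparative
# word, and a binary label (1 = "yes"). Not the paper's actual schema.
ANTONYM = {"more": "less", "less": "more"}  # hand-written linguistic knowledge


def augment_symmetric(example):
    """Symmetry rule: swapping the two entities flips the truth value."""
    return {
        "a": example["b"],
        "b": example["a"],
        "cmp": example["cmp"],
        "label": 1 - example["label"],
    }


def augment_antonym(example):
    """Antonym rule: replacing the comparative with its opposite flips the label."""
    return {
        "a": example["a"],
        "b": example["b"],
        "cmp": ANTONYM[example["cmp"]],
        "label": 1 - example["label"],
    }


def consistency_penalty(p_orig, p_flipped):
    """Illustrative consistency regularizer.

    If the model is globally consistent, its "yes" probabilities for a
    question and its logically flipped pair should sum to 1; the absolute
    deviation serves as a simple surrogate penalty added to the loss.
    """
    return abs(p_orig + p_flipped - 1.0)


seed = {"a": "iron", "b": "cotton", "cmp": "more", "label": 1}
sym = augment_symmetric(seed)   # "cotton ... more ... iron" -> label 0
ant = augment_antonym(seed)     # "iron ... less ... cotton" -> label 0
```

At training time, each seed example yields extra labeled pairs for free, and the penalty term ties the model's predictions on the pair together rather than treating them as independent examples.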
