Incidental Supervision from Question-Answering Signals

09/01/2019
by Hangfeng He, et al.

Human annotations are costly for many natural language processing (NLP) tasks, especially those requiring NLP expertise. One promising solution is to use natural language to annotate natural language. However, how to obtain supervision signals, or learn representations, from such natural language annotations remains an open problem. This paper studies the case where the annotations take the form of question answering (QA) and proposes an effective way to learn representations that are useful for other tasks. We also find that representations derived from question-answer meaning representation (QAMR) data improve performance almost universally across a wide range of tasks, suggesting that this kind of natural language annotation indeed provides unique information beyond what modern language models already capture.
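To make the recipe concrete, here is a minimal sketch of the general transfer pattern the abstract describes: fine-tune an encoder on a QAMR-style span-extraction example, then reuse the shared encoder's hidden states as features for other tasks. The HuggingFace model, example sentence, and span indices are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch, assuming the HuggingFace transformers library and a
# BERT encoder (illustrative choices, not necessarily the paper's setup):
# supervise with QAMR-style span-extraction QA, then reuse the encoder's
# hidden states as representations for downstream tasks.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
qa_model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

# A QAMR-style pair: a question about a sentence, answered by a span of it.
question = "Who wrote the report?"
context = "The analyst wrote the report last week."
inputs = tokenizer(question, context, return_tensors="pt")

# QA supervision step: the gold start/end token indices of the answer span
# ("The analyst") would come from the QAMR annotations; hard-coded here.
outputs = qa_model(
    **inputs,
    start_positions=torch.tensor([7]),
    end_positions=torch.tensor([8]),
)
outputs.loss.backward()  # one QA fine-tuning gradient (optimizer step omitted)

# After training, the shared encoder yields QA-informed representations
# that a downstream task model (e.g., SRL, NER, coreference) can consume.
with torch.no_grad():
    features = qa_model.bert(**inputs).last_hidden_state  # (1, seq_len, 768)
print(features.shape)
```

In the transfer setting studied by the paper, such QA-trained representations would be fed to downstream task models in place of, or alongside, features from a plain pretrained language model.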
