Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning

08/16/2019
by Pradeep Dasigi, et al.
Machine comprehension of texts longer than a single sentence often requires coreference resolution. However, most current reading comprehension benchmarks do not contain complex coreferential phenomena and hence fail to evaluate the ability of models to resolve coreference. We present a new crowdsourced dataset containing 15K span-selection questions that require resolving coreference among entities in about 3.5K English paragraphs from Wikipedia. Obtaining questions focused on such phenomena is challenging, because it is hard to avoid lexical cues that shortcut complex reasoning. We deal with this issue by using a strong baseline model as an adversary in the crowdsourcing loop, which helps crowdworkers avoid writing questions with exploitable surface cues. We show that state-of-the-art reading comprehension models perform poorly on this benchmark: the best model performance is 49 F1, while the estimated human performance is 87.2 F1.
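The adversary-in-the-loop collection described above can be sketched as a simple filter: a candidate question survives only if the baseline reader fails on it. This is a minimal illustration, not the paper's actual pipeline; all function and field names here (`adversarial_filter`, `toy_baseline`, `gold_answer`) are hypothetical.

```python
def adversarial_filter(questions, baseline_answer, is_correct):
    """Keep only questions the baseline model answers incorrectly."""
    kept = []
    for q in questions:
        prediction = baseline_answer(q["paragraph"], q["question"])
        if not is_correct(prediction, q["gold_answer"]):
            # Baseline failed, so the question likely lacks an
            # exploitable surface cue; keep it in the dataset.
            kept.append(q)
    return kept


# Toy stand-in "baseline" that always answers with the first word,
# purely to demonstrate the filtering mechanics.
def toy_baseline(paragraph, question):
    return paragraph.split()[0]


questions = [
    {"paragraph": "Alice met Bob. She greeted him.",
     "question": "Who greeted Bob?", "gold_answer": "Alice"},
    {"paragraph": "Paris is in France.",
     "question": "Where is Paris?", "gold_answer": "France"},
]

kept = adversarial_filter(
    questions, toy_baseline,
    lambda pred, gold: pred.rstrip(".") == gold,
)
# The first question is rejected (the toy baseline gets it right);
# the second is kept.
```

In the real collection loop the rejected question would be shown back to the crowdworker for revision rather than discarded, which is what steers workers away from questions answerable by surface cues alone.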
