Evaluating Sentence-Level Relevance Feedback for High-Recall Information Retrieval

03/23/2018
by Haotian Zhang, et al.

This study uses a novel simulation framework to evaluate whether the time and effort necessary to achieve high recall using active learning are reduced by presenting the reviewer with isolated sentences, rather than full documents, for relevance feedback. Under the weak assumption that reviewing an entire document takes more time and effort than reviewing a single sentence, simulation results indicate that relevance feedback on isolated sentences can achieve comparable accuracy with higher efficiency, relative to the state-of-the-art Baseline Model Implementation (BMI) of the AutoTAR Continuous Active Learning ("CAL") method employed in the TREC 2015 and 2016 Total Recall Track.
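As a rough illustration of the kind of feedback loop being simulated, the sketch below implements a generic CAL-style review loop in Python with scikit-learn. It is not the authors' BMI/AutoTAR code: the function name simulate_cal, the seeding step, the batch size, and the word-count "effort" model are all assumptions made for this example, and the sentence shown to the simulated reviewer is a crude stand-in for the paper's highest-scoring sentence.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def simulate_cal(docs, labels, feedback_unit="sentence", batch_size=10, budget=500):
    """Run a simulated CAL loop; return (relevant found, items judged, review effort)."""
    X = TfidfVectorizer(sublinear_tf=True).fit_transform(docs)
    judged, found, effort = set(), 0, 0

    # CAL is typically seeded with a known relevant example (or a synthetic query document).
    seed = int(np.argmax(labels))
    judged.add(seed)
    found += labels[seed]

    while len(judged) < min(budget, len(docs)):
        idx = sorted(judged)
        y = np.array([labels[i] for i in idx])
        if len(set(y.tolist())) < 2:
            # Too few label classes to train a classifier yet; rank by similarity to the seed.
            scores = (X @ X[seed].T).toarray().ravel()
        else:
            clf = LogisticRegression(max_iter=1000).fit(X[idx], y)
            scores = clf.decision_function(X)

        # Present the top-scoring unjudged documents for (simulated) relevance feedback.
        ranked = [int(i) for i in np.argsort(-scores) if int(i) not in judged]
        for i in ranked[:batch_size]:
            if feedback_unit == "sentence":
                # Sentence-level feedback: charge only one sentence's worth of reading.
                effort += len(docs[i].split(".")[0].split())
            else:
                # Document-level feedback: charge the full document length.
                effort += len(docs[i].split())
            judged.add(i)
            found += labels[i]

    return found, len(judged), effort
```

Under this kind of setup, the abstract's comparison corresponds to running the loop twice on the same judged collection, once with feedback_unit="sentence" and once with feedback_unit="document", and comparing the recall achieved at equal simulated review effort.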
