Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data

by Qingyu Tan, et al.
National University of Singapore
Alibaba Group

Relation extraction (RE) aims to extract relations between entities from sentences and documents. Existing RE models typically rely on supervised machine learning. However, recent studies have shown that many RE datasets are incompletely annotated, a problem known as the false negative problem: valid relation instances are falsely annotated as 'no_relation'. Models trained on such data inevitably make similar mistakes at inference time. Self-training has proven effective in alleviating the false negative problem, but traditional self-training is vulnerable to confirmation bias and performs poorly on minority classes. To overcome these limitations, we propose a novel class-adaptive re-sampling self-training framework. Specifically, we re-sample the pseudo-labels of each class according to its precision and recall scores. Our re-sampling strategy favors the pseudo-labels of classes with high precision and low recall, which improves overall recall without significantly compromising precision. We conduct experiments on document-level and biomedical relation extraction datasets, and the results show that our self-training framework consistently outperforms existing competitive methods on the Re-DocRED and ChemDisgene datasets when the training data are incompletely annotated. Our code is released at
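The class-adaptive re-sampling idea can be illustrated with a minimal sketch. This is not the authors' exact weighting function; it only assumes, for illustration, that each pseudo-label is kept with a probability proportional to its class's precision times one minus its recall, so classes the model predicts precisely but frequently misses (high precision, low recall) contribute the most pseudo-labels. The function name `resample_pseudo_labels` and its inputs are hypothetical.

```python
import random

def resample_pseudo_labels(pseudo_labels, per_class_precision, per_class_recall, seed=0):
    """Class-adaptive re-sampling sketch.

    pseudo_labels: list of (example, class) pairs predicted by the teacher model.
    per_class_precision / per_class_recall: dicts of dev-set scores per class.
    Keeps each pseudo-label with probability precision * (1 - recall), so
    high-precision, low-recall classes are favored.
    """
    rng = random.Random(seed)
    kept = []
    for example, cls in pseudo_labels:
        p = per_class_precision.get(cls, 0.0)
        r = per_class_recall.get(cls, 1.0)
        keep_prob = p * (1.0 - r)  # high precision, low recall -> high keep probability
        if rng.random() < keep_prob:
            kept.append((example, cls))
    return kept

# A class with perfect precision but zero recall is always kept;
# a class whose recall is already perfect contributes nothing new.
labels = [("sent_1", "founded_by"), ("sent_2", "located_in")]
precision = {"founded_by": 1.0, "located_in": 1.0}
recall = {"founded_by": 0.0, "located_in": 1.0}
print(resample_pseudo_labels(labels, precision, recall))
```

Under this toy weighting, the kept pseudo-labels are exactly those whose classes the abstract says should be favored; the retained examples would then be added back to the training set for the next self-training round.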




Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation

Document-level Relation Extraction (DocRE) is a more challenging task co...

Revisiting DocRED – Addressing the Overlooked False Negative Problem in Relation Extraction

The DocRED dataset is one of the most popular and widely used benchmarks...

Does Recommend-Revise Produce Reliable Annotations? An Analysis on Missing Instances in DocRED

DocRED is a widely used dataset for document-level relation extraction. ...

Revisiting the Negative Data of Distantly Supervised Relation Extraction

Distant supervision automatically generates plenty of training samples...

Bootstrapping Relation Extractors using Syntactic Search by Examples

The advent of neural-networks in NLP brought with it substantial improve...

STAD: Self-Training with Ambiguous Data for Low-Resource Relation Extraction

We present a simple yet effective self-training approach, named as STAD,...

Enhancing Continual Relation Extraction via Classifier Decomposition

Continual relation extraction (CRE) models aim at handling emerging new ...
