Supervised Neural Models Revitalize the Open Relation Extraction

09/25/2018
by Shengbin Jia, et al.

Open relation extraction (ORE) remains a challenging task: obtaining a semantic representation by discovering arbitrary relation tuples in unstructured text. Perhaps due to limited training data, previous extractors rely on unsupervised or semi-supervised pattern-matching methods, which depend heavily on manual effort or syntactic parsers and are therefore inefficient or prone to error cascading. As a result, their development has hit a bottleneck. Although a few recent studies apply neural network models to ORE, building supervised systems on various neural architectures has remained intractable for this task. We analyze and review neural ORE methods. Furthermore, we construct a large-scale, automatically tagged training set and design a tagging scheme that frames ORE as a supervised sequence tagging task. We propose a hybrid neural sequence tagging model (NST) that combines a BiLSTM, a CNN, and a CRF to capture the contextual temporal information, local spatial information, and sentence-level tag information of the sequence, using word and part-of-speech embeddings. Experiments on multiple datasets show that our method outperforms most existing pattern-based methods as well as other neural network based models.
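The architecture the abstract describes is straightforward to sketch. Below is a minimal, illustrative PyTorch implementation, not the authors' NST code: the vocabulary sizes, embedding dimensions, tag set, and class name `NSTTagger` are all hypothetical. A full CRF training objective (negative log-likelihood via the forward algorithm) is omitted for brevity; only the emission network and Viterbi decoding over learned transition scores are shown.

```python
# A minimal sketch (not the authors' released code) of a BiLSTM-CNN-CRF
# sequence tagger over word + POS embeddings, as the abstract describes.
# All dimensions, vocabulary sizes, and the tag set are hypothetical.
import torch
import torch.nn as nn

class NSTTagger(nn.Module):
    def __init__(self, vocab_size=10000, pos_size=50, num_tags=8,
                 word_dim=100, pos_dim=25, hidden=128, kernel=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)
        in_dim = word_dim + pos_dim
        # CNN over the token sequence: local spatial features.
        self.conv = nn.Conv1d(in_dim, hidden, kernel, padding=kernel // 2)
        # BiLSTM over CNN features: contextual temporal features.
        self.lstm = nn.LSTM(hidden, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.emissions = nn.Linear(hidden, num_tags)
        # CRF transition scores: transitions[i, j] = score of tag i -> tag j.
        self.transitions = nn.Parameter(torch.randn(num_tags, num_tags))

    def forward(self, words, pos):                # (batch, seq_len) each
        x = torch.cat([self.word_emb(words), self.pos_emb(pos)], dim=-1)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        x, _ = self.lstm(x)
        return self.emissions(x)                  # (batch, seq_len, num_tags)

    @torch.no_grad()
    def viterbi_decode(self, emissions):
        # emissions: (seq_len, num_tags) for a single sentence.
        score = emissions[0]
        history = []
        for emit in emissions[1:]:
            # total[i, j] = score of arriving at tag j from previous tag i.
            total = score.unsqueeze(1) + self.transitions + emit.unsqueeze(0)
            score, best_prev = total.max(dim=0)
            history.append(best_prev)
        best_tag = int(score.argmax())
        path = [best_tag]
        for best_prev in reversed(history):
            best_tag = int(best_prev[best_tag])
            path.append(best_tag)
        return list(reversed(path))
```

In this sketch, a trained model tags a sentence by computing emissions for its word and POS index tensors and then calling `viterbi_decode` on each sentence's emission matrix; training would additionally maximize the CRF log-likelihood of the gold tag sequences, which the actual NST model presumably does as well.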
