Improving reference mining in patents with BERT
References in patents to scientific literature provide relevant information for studying the relation between science and technological inventions. These references allow us to answer questions about the types of scientific work that lead to inventions. Most prior work analysing the citations between patents and scientific publications focused on front-page citations, which are well structured and provided in the metadata of patent archives such as Google Patents. In their 2019 paper, Verberne et al. evaluate two sequence labelling methods for extracting references from patents: Conditional Random Fields (CRF) and Flair. In this paper we extend that work by (1) improving the quality of the training data and (2) applying BERT-based models to the problem. We use error analysis throughout our work to find problems in the dataset, improve our models, and reason about the types of errors different models are susceptible to. We first discuss the work by Verberne et al. and other related work in Section 2. We then describe the improvements we make to the dataset and the new models proposed for this task. We compare the results of our new models with previous results, both on the labelled dataset and on a larger unlabelled corpus. We end with a discussion of the characteristics of our new models' results, followed by a conclusion. Our code and improved dataset are released under an open-source license on GitHub.
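To make the task concrete: reference extraction is framed as sequence labelling, where each token in a patent's text is assigned a tag indicating whether it belongs to an in-text reference. The sketch below shows one common encoding for such data, BIO tagging; the tag names (`B-REF`, `I-REF`, `O`) and the helper function are illustrative assumptions, not the paper's actual tagset or code.

```python
def to_bio_tags(tokens, ref_spans):
    """Convert token-index spans of references to BIO tags.

    tokens:    list of token strings
    ref_spans: list of (start, end) pairs of token indices, end exclusive,
               each marking one reference to scientific literature
    """
    tags = ["O"] * len(tokens)  # default: token is outside any reference
    for start, end in ref_spans:
        tags[start] = "B-REF"  # first token of a reference
        for i in range(start + 1, end):
            tags[i] = "I-REF"  # continuation tokens of the same reference
    return tags

# Hypothetical example: a citation embedded in a patent sentence.
tokens = ["as", "described", "in", "Smith", "et", "al.", ",", "2005", "."]
print(to_bio_tags(tokens, [(3, 8)]))
# → ['O', 'O', 'O', 'B-REF', 'I-REF', 'I-REF', 'I-REF', 'I-REF', 'O']
```

A CRF, Flair, or BERT token-classification model is then trained to predict these per-token tags, and contiguous `B-REF`/`I-REF` runs in the predictions are decoded back into reference strings.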