Unsupervised Domain Adaptation of Language Models for Reading Comprehension

11/25/2019
by Kosuke Nishida, et al.

This study tackles unsupervised domain adaptation of reading comprehension (UDARC). Reading comprehension (RC) is the task of answering questions over textual sources. State-of-the-art RC models still lack general linguistic intelligence; that is, their accuracy degrades on out-of-domain datasets that were not used in training. We hypothesize that this degradation is caused by a lack of language modeling (LM) capability for the out-of-domain data. The UDARC setting allows models to use supervised RC training data in the source domain and only unlabeled passages in the target domain. To solve the UDARC problem, we provide two domain adaptation models. The first learns the out-of-domain LM and the in-domain RC task sequentially. The second is the proposed model, which uses a multi-task learning approach that trains LM and RC jointly. Both models can retain the RC capability acquired from the supervised data in the source domain as well as the LM capability acquired from the unlabeled data in the target domain. We evaluated the models on UDARC with five datasets from different domains. Both models outperformed a baseline without domain adaptation. In particular, the proposed model yielded an improvement of 4.3/4.2 points in EM/F1 on an unseen biomedical domain.
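To make the multi-task approach concrete, the sketch below shows, in PyTorch, one joint training step that combines a span-extraction RC loss on labeled source-domain batches with a masked-LM loss on unlabeled target-domain passages, sharing a single encoder between the two heads. This is a minimal illustration under stated assumptions, not the authors' implementation: the class and field names (SharedEncoderMultiTask, qa_head, lm_head, start_positions, labels), the tiny Transformer body, and the equal loss weighting are all made up for the example.

```python
# Minimal sketch of multi-task LM + RC training (illustrative, not the paper's code).
# A shared encoder feeds two heads: a span-extraction QA head trained on labeled
# source-domain data and a masked-LM head trained on unlabeled target-domain text.
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN = 30522, 256  # assumed vocabulary and hidden sizes


class SharedEncoderMultiTask(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared body
        self.qa_head = nn.Linear(HIDDEN, 2)           # start/end logits for RC
        self.lm_head = nn.Linear(HIDDEN, VOCAB_SIZE)  # token logits for masked LM

    def forward(self, token_ids):
        return self.encoder(self.embed(token_ids))    # (batch, seq, hidden)


def multitask_step(model, rc_batch, lm_batch, optimizer, lm_weight=1.0):
    """One joint step: RC loss on source-domain data plus LM loss on target-domain text."""
    ce = nn.CrossEntropyLoss(ignore_index=-100)

    # (a) Span-extraction RC loss on the labeled source domain.
    h = model(rc_batch["input_ids"])                       # (B, T, H)
    start_logits, end_logits = model.qa_head(h).split(1, dim=-1)
    rc_loss = (ce(start_logits.squeeze(-1), rc_batch["start_positions"])
               + ce(end_logits.squeeze(-1), rc_batch["end_positions"]))

    # (b) Masked-LM loss on the unlabeled target domain; label -100 means "not masked".
    h_lm = model(lm_batch["input_ids"])
    lm_loss = ce(model.lm_head(h_lm).view(-1, VOCAB_SIZE), lm_batch["labels"].view(-1))

    loss = rc_loss + lm_weight * lm_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SharedEncoderMultiTask()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    B, T = 2, 16
    rc_batch = {"input_ids": torch.randint(0, VOCAB_SIZE, (B, T)),
                "start_positions": torch.randint(0, T, (B,)),
                "end_positions": torch.randint(0, T, (B,))}
    lm_labels = torch.full((B, T), -100, dtype=torch.long)  # ignore unmasked positions
    lm_labels[:, :3] = torch.randint(0, VOCAB_SIZE, (B, 3))
    lm_batch = {"input_ids": torch.randint(0, VOCAB_SIZE, (B, T)), "labels": lm_labels}
    print(multitask_step(model, rc_batch, lm_batch, opt))
```

Training the two objectives together keeps the shared encoder exposed to target-domain text while it learns the RC task, which is the intuition behind retaining both capabilities; the sequential baseline would instead run the LM phase first and the RC phase afterwards.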

Related research

11/13/2019  Unsupervised Domain Adaptation on Reading Comprehension
06/14/2022  Task Transfer and Domain Adaptation for Zero-Shot Question Answering
02/26/2022  BioADAPT-MRC: Adversarial Learning-based Domain Adaptation Improves Biomedical Machine Reading Comprehension Task
05/15/2022  Not to Overfit or Underfit? A Study of Domain Generalization in Question Answering
11/01/2019  Forget Me Not: Reducing Catastrophic Forgetting for Domain Adaptation in Reading Comprehension
08/25/2020  Continual Domain Adaptation for Machine Reading Comprehension
08/24/2019  Adversarial Domain Adaptation for Machine Reading Comprehension
