Unsupervised Domain Adaptation of Contextualized Embeddings: A Case Study in Early Modern English

04/04/2019
by   Xiaochuang Han, et al.

Contextualized word embeddings such as ELMo and BERT provide a foundation for strong performance across a range of natural language processing tasks, in part by pretraining on a large and topically-diverse corpus. However, the applicability of this approach is unknown when the target domain varies substantially from the text used during pretraining. Specifically, we are interested in the scenario in which labeled data is available in only a canonical source domain such as newstext, and the target domain is distinct from both the labeled corpus and the pretraining data. To address this scenario, we propose domain-adaptive fine-tuning, in which the contextualized embeddings are adapted by masked language modeling on the target domain. We test this approach on the challenging domain of Early Modern English, which differs substantially from existing pretraining corpora. Domain-adaptive fine-tuning yields an improvement of 4% in part-of-speech tagging accuracy over a BERT baseline, substantially improving on prior work on this task.
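The core idea is to continue the masked language modeling objective on unlabeled target-domain text before fine-tuning on the labeled source-domain task. The following is a minimal sketch of that step using the Hugging Face transformers and datasets libraries; it is not the authors' released code, and the file name, checkpoint, and hyperparameters are illustrative assumptions.

```python
# Domain-adaptive fine-tuning sketch: continue masked language modeling (MLM)
# on unlabeled target-domain text (e.g. Early Modern English).
# Assumed inputs: target_domain.txt with one sentence per line.
from transformers import (BertTokenizerFast, BertForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")

# Load and tokenize the unlabeled target-domain corpus.
raw = load_dataset("text", data_files={"train": "target_domain.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Randomly mask tokens; the model is trained to reconstruct them.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-domain-adapted",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()

# The adapted encoder can then initialize a token-classification model
# that is fine-tuned on labeled source-domain POS-tagging data.
model.save_pretrained("bert-domain-adapted")
tokenizer.save_pretrained("bert-domain-adapted")
```

In this setup the downstream POS tagger would be built on top of the adapted checkpoint rather than the original pretrained weights, so the encoder has already seen target-domain text even though all labeled data comes from the source domain.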
