Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords

07/14/2023
by Shahriar Golchin, et al.

We propose a novel task-agnostic in-domain pre-training method that sits between generic pre-training and fine-tuning. Our approach selectively masks in-domain keywords, i.e., words that provide a compact representation of the target domain. We identify such keywords using KeyBERT (Grootendorst, 2020). We evaluate our approach using six different settings: three datasets combined with two distinct pre-trained language models (PLMs). Our results reveal that fine-tuned PLMs adapted using our in-domain pre-training strategy outperform PLMs that underwent in-domain pre-training with random masking, as well as those that followed the common pre-train-then-fine-tune paradigm. Further, the overhead of identifying in-domain keywords is reasonable, e.g., 7-15% of the pre-training time (for two epochs) for BERT Large (Devlin et al., 2019).
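To make the idea concrete, the sketch below shows one way the pipeline described in the abstract could look: extract in-domain keywords with KeyBERT, then build masked-LM inputs that mask only those keywords rather than masking tokens uniformly at random. This is an illustrative assumption-based sketch, not the authors' released code; the example corpus, the bert-large-uncased checkpoint, the top_n keyword budget, and the surface-form matching rule are all placeholders.

# Hedged sketch: keyword-driven masking for domain-adaptive pre-training.
# KeyBERT identifies in-domain keywords; an MLM example then masks only those
# tokens. All specifics (corpus, checkpoint, top_n, matching rule) are assumptions.
import torch
from keybert import KeyBERT
from transformers import AutoTokenizer

corpus = [
    "The patient presented with acute myocardial infarction and elevated troponin.",
    "Echocardiography revealed reduced left ventricular ejection fraction.",
]

# 1) Extract a compact set of in-domain keywords from the target-domain corpus.
kw_model = KeyBERT()
keywords = set()
for doc in corpus:
    for kw, _score in kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 1), top_n=5):
        keywords.add(kw.lower())

# 2) Build an MLM example that masks keyword tokens instead of random tokens.
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")

def mask_keywords(text: str):
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"].clone()
    labels = torch.full_like(input_ids, -100)  # -100 is ignored by the MLM loss
    tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
    for i, tok in enumerate(tokens):
        # Crude surface-form match on WordPiece tokens; the paper's exact
        # matching procedure may differ.
        if tok.lstrip("#").lower() in keywords:
            labels[0, i] = input_ids[0, i]
            input_ids[0, i] = tokenizer.mask_token_id
    enc["input_ids"], enc["labels"] = input_ids, labels
    return enc

batch = mask_keywords(corpus[0])
print(tokenizer.decode(batch["input_ids"][0]))

In practice, the masked batches produced this way would be fed to a standard masked-language-modeling objective for a couple of epochs of in-domain pre-training before fine-tuning on the downstream task, mirroring the intermediate step the abstract describes.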
