kNN-Adapter: Efficient Domain Adaptation for Black-Box Language Models

02/21/2023
by Yangsibo Huang, et al.

Fine-tuning a language model on a new domain is standard practice for domain adaptation. However, it can be infeasible for modern large-scale language models such as GPT-3, which are often accessible only through APIs that expose none of the model's internal parameters. In this paper, we propose kNN-Adapter, a method to effectively adapt these black-box large language models (LLMs) to a new domain. kNN-Adapter builds on retrieval-augmented language modeling and adaptively learns to interpolate the output distribution of the language model with retrieval results from a datastore built from target-domain data. Our experiments on four different domains demonstrate that kNN-Adapter significantly improves perplexity and works particularly well in settings with limited access to the LLM. We also show that kNN-Adapter is more effective than fine-tuning when the amount of training data is limited, and we release a dataset to encourage further study.
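To make the interpolation idea concrete, below is a minimal sketch of kNN-LM-style interpolation with a learned mixing weight, which is the general mechanism the abstract describes. All names here (Datastore, knn_next_token_probs, interpolate, the faiss index choice) are illustrative assumptions, not the authors' released code, and the sketch assumes the black-box LM exposes next-token probabilities (e.g., top-k log-probs via an API).

```python
import numpy as np
import faiss  # nearest-neighbor index; any ANN library would do


class Datastore:
    """Stores (context embedding, next-token id) pairs from target-domain text."""

    def __init__(self, dim: int):
        self.index = faiss.IndexFlatL2(dim)
        self.next_tokens: list[int] = []

    def add(self, context_embs: np.ndarray, next_token_ids: list[int]) -> None:
        # context_embs: (n, dim) embeddings of contexts seen in the target domain
        self.index.add(context_embs.astype(np.float32))
        self.next_tokens.extend(next_token_ids)

    def knn_next_token_probs(
        self, query_emb: np.ndarray, vocab_size: int, k: int = 16, temperature: float = 1.0
    ) -> np.ndarray:
        # Retrieve the k nearest stored contexts and convert their distances
        # into a distribution over the next token (softmax of negative distances).
        dists, idxs = self.index.search(query_emb[None].astype(np.float32), k)
        weights = np.exp(-dists[0] / temperature)
        weights /= weights.sum()
        probs = np.zeros(vocab_size)
        for w, i in zip(weights, idxs[0]):
            probs[self.next_tokens[i]] += w
        return probs


def interpolate(p_lm: np.ndarray, p_knn: np.ndarray, lam: float) -> np.ndarray:
    # Final next-token distribution: lam * p_kNN + (1 - lam) * p_LM.
    # In kNN-Adapter, lam is not a fixed constant; it is predicted by a small
    # learned module. Here it is passed in as a plain scalar for illustration.
    return lam * p_knn + (1.0 - lam) * p_lm
```

A fixed interpolation weight (as in vanilla kNN-LM) can over- or under-trust retrieval depending on the query; learning the weight adaptively is what lets the adapter work on top of a frozen, API-only model, since only output probabilities are needed.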
