Instruction Mining: High-Quality Instruction Data Selection for Large Language Models

07/12/2023
by Yihan Cao, et al.

Large language models typically undergo two training stages: pretraining and finetuning. Although large-scale pretraining endows a model with strong capabilities for generating natural language responses, pretrained models can still fail to understand human instructions. To enhance language models' ability to interpret and respond to instructions, instruction finetuning has emerged as a critical method in this area. Recent studies have found that large language models can be finetuned to perform well even with a small amount of high-quality instruction-following data. However, the selection of high-quality datasets for finetuning language models still lacks clear guidelines. In this paper, we propose InstructMining, a linear rule for evaluating instruction-following data quality. We formulate InstructMining using specific natural language indicators. To investigate the relationship between data quality and these indicators, we conduct extensive finetuning experiments, and the experimental results are then used to estimate the parameters of InstructMining. To further evaluate its performance, we use InstructMining to select high-quality data from unseen datasets. Results demonstrate that InstructMining can select relatively high-quality samples from various instruction-following datasets. Compared to models finetuned on unfiltered datasets, models finetuned on InstructMining-selected datasets perform better in 42.5% of cases.
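The abstract describes InstructMining as a linear rule over natural language indicators whose weights are estimated from finetuning experiments. A minimal sketch of such a rule is shown below; the indicator vectors, weights, and the convention that a higher score means higher quality are all illustrative assumptions, not values or choices taken from the paper.

```python
import numpy as np

def instructmining_score(x, w, b=0.0):
    """Linear quality rule over per-sample indicator values:
    score(x) = b + sum_i w[i] * x[i].
    In this sketch a HIGHER score is treated as higher quality
    (an assumed convention, not the paper's)."""
    return b + float(np.dot(w, x))

def select_top_fraction(samples, indicators, w, b=0.0, frac=0.2):
    """Keep the top `frac` of samples ranked by the linear score."""
    scores = [instructmining_score(x, w, b) for x in indicators]
    k = max(1, int(len(samples) * frac))
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    return [samples[i] for i in order[:k]]

# Toy usage with made-up indicator vectors and weights.
samples = ["a", "b", "c", "d", "e"]
indicators = [[0.1, 0.2], [0.8, 0.3], [0.5, 0.5], [0.9, 0.9], [0.2, 0.15]]
w = [1.0, 1.0]
print(select_top_fraction(samples, indicators, w, frac=0.4))
```

Once the weights are fitted, selection reduces to scoring every candidate sample once and keeping the top-ranked fraction, which is why a linear rule scales to large instruction datasets.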


