Harnessing the Power of David against Goliath: Exploring Instruction Data Generation without Using Closed-Source Models

08/24/2023
by Yue Wang, et al.

Instruction tuning is instrumental in enabling Large Language Models (LLMs) to follow user instructions and complete a variety of open-domain tasks. The success of instruction tuning depends on the availability of high-quality instruction data. Because human annotation is costly and often of low quality, recent works have explored using powerful closed-source models to generate instruction data automatically. However, these methods carry potential risks arising from the usage requirements of powerful closed-source models, which strictly forbid using their outputs to develop machine learning models. To address this problem, we explore alternative approaches for generating high-quality instruction data that do not rely on closed-source models. Our exploration investigates various existing instruction generation methods and culminates in integrating the most efficient variant with two novel strategies that further enhance quality. Evaluation results on two benchmarks and from the GPT-4 model demonstrate the effectiveness of our generated instruction data, which outperforms Alpaca, a method reliant on closed-source models. We hope that further progress can be made in generating high-quality instruction data without using closed-source models.
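To make the setting concrete, the sketch below shows one way instruction data can be produced with an open-source model instead of a closed-source API. This is a minimal illustration, assuming the Hugging Face transformers library; the model name, prompt template, sampling settings, and Alpaca-style record layout are illustrative assumptions and are not taken from the paper.

# Minimal sketch (not the paper's pipeline): answer a seed instruction with an
# open-source model and store the pair as an Alpaca-style record.
# Assumes the `transformers` library is installed; the model name below is an
# illustrative choice of permissively licensed open model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-7b-instruct"  # illustrative open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

seed_instruction = "Explain the difference between supervised and unsupervised learning."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{seed_instruction}\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Keep only the newly generated tokens, i.e. the model's response.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# Alpaca-style instruction-tuning record.
record = {"instruction": seed_instruction, "input": "", "output": response.strip()}
print(record)

In practice, records generated this way would still need to be filtered for quality and deduplicated before being used for instruction tuning.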

Related research

09/11/2023
TeGit: Generating High-Quality Instruction-Tuning Data with Text-Grounded Task Design
High-quality instruction-tuning data is critical to improving LLM capabi...

07/12/2023
Instruction Mining: High-Quality Instruction Data Selection for Large Language Models
Large language models typically undergo two training stages, pretraining...

04/17/2023
LongForm: Optimizing Instruction Tuning for Long Text Generation with Corpus Extraction
Instruction tuning enables language models to generalize more effectivel...

07/17/2023
AlpaGasus: Training A Better Alpaca with Fewer Data
Large language models (LLMs) obtain instruction-following capability thr...

05/09/2023
Towards Building the Federated GPT: Federated Instruction Tuning
While "instruction-tuned" generative large language models (LLMs) have d...

08/11/2023
Self-Alignment with Instruction Backtranslation
We present a scalable method to build a high quality instruction followi...

08/10/2023
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data ...
