Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning

03/28/2023
by Vladislav Lialin, et al.

This paper presents a systematic overview and comparison of parameter-efficient fine-tuning (PEFT) methods, covering over 40 papers published between February 2019 and February 2023. These methods aim to resolve the infeasibility and impracticality of fine-tuning large language models by training only a small set of parameters. We provide a taxonomy that covers a broad range of methods and present a detailed method comparison with a specific focus on real-life efficiency and on fine-tuning multi-billion-scale language models.
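To make the core idea concrete (the pretrained weights stay frozen while only a small set of new parameters is trained), here is a minimal PyTorch sketch of one representative PEFT method in this family, low-rank adaptation (LoRA). The LoRALinear class, the rank and scaling values, and the toy model are illustrative assumptions for this sketch, not an implementation taken from the paper.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)       # freeze the pretrained weight
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)     # freeze the pretrained bias
            # Only these two small factors are trained: r * (in + out) parameters.
            self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: update starts at 0
            self.scaling = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Frozen path plus the scaled low-rank correction.
            return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

    # Toy "pretrained" model: freeze everything, then add LoRA to one layer.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
    for p in model.parameters():
        p.requires_grad_(False)
    model[0] = LoRALinear(model[0], r=8)

    trainable = [p for p in model.parameters() if p.requires_grad]
    print(sum(p.numel() for p in trainable), "trainable /",
          sum(p.numel() for p in model.parameters()), "total parameters")
    optimizer = torch.optim.AdamW(trainable, lr=1e-4)

Because the low-rank update can be merged back into the base weight matrix after training, reparameterization methods of this kind add no inference latency, which is one reason they are attractive at multi-billion-parameter scale.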

