Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models

08/21/2023
by Martin Weyssow, et al.

Large Language Models (LLMs) possess impressive capabilities to generate meaningful code snippets from natural language intents in a zero-shot fashion, i.e., without the need for specific fine-tuning. To unleash their full potential, prior work has demonstrated the benefits of fine-tuning the models on task-specific data. However, the fine-tuning process demands heavy computational costs and is intractable when resources are scarce, especially for models with billions of parameters. In light of these challenges, previous studies explored In-Context Learning (ICL) as an effective strategy for generating contextually appropriate code without fine-tuning. However, ICL operates at inference time and does not involve learning task-specific parameters, which potentially limits the model's performance on downstream tasks. In this context, we foresee that Parameter-Efficient Fine-Tuning (PEFT) techniques carry a high potential for efficiently specializing LLMs to task-specific data. In this paper, we deliver a comprehensive study of the impact of PEFT techniques on LLMs in the automated code generation scenario. Our experimental results reveal the superiority and potential of such techniques over ICL across a wide range of LLMs, both in reducing the computational burden and in improving performance. The study thus opens opportunities for broader applications of PEFT in software engineering scenarios.
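As a concrete illustration of what such a PEFT setup can look like in practice, the sketch below applies one popular technique, LoRA, to a causal code LLM using Hugging Face's transformers and peft libraries. The model name, target module name, and hyperparameters are illustrative assumptions rather than the paper's exact configuration; unlike ICL, this approach does learn a small set of task-specific parameters.

# Minimal sketch, assuming Hugging Face transformers and peft are installed.
# LoRA freezes the base model and trains small low-rank adapter matrices,
# so only a tiny fraction of the parameters receives gradients.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Salesforce/codegen-350M-mono"  # illustrative choice of code LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                          # rank of the low-rank update matrices
    lora_alpha=16,                # scaling factor applied to the update
    target_modules=["qkv_proj"],  # assumed attention projection name in CodeGen
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# The wrapped model can then be fine-tuned on task-specific code data with any
# standard training loop or the transformers Trainer.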

research · 07/24/2022
No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence
Pre-trained models have been shown effective in many code intelligence t...

research · 09/13/2023
Scaled Prompt-Tuning for Few-Shot Natural Language Generation
The increasingly Large Language Models (LLMs) demonstrate stronger langu...

research · 07/15/2023
CPET: Effective Parameter-Efficient Tuning for Compressed Large Language Models
Parameter-efficient tuning (PET) has been widely explored in recent year...

research · 08/29/2022
Exploring and Evaluating Personalized Models for Code Generation
Large Transformer models achieved the state-of-the-art status for Natura...

research · 04/28/2023
Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs
As foundation models continue to exponentially scale in size, efficient ...

research · 02/10/2023
Impact of Code Language Models on Automated Program Repair
Automated program repair (APR) aims to help developers improve software ...

research · 07/16/2023
Is Imitation All You Need? Generalized Decision-Making with Dual-Phase Training
We introduce DualMind, a generalist agent designed to tackle various dec...
