Exploring Continual Learning for Code Generation Models

07/05/2023
by Prateek Yadav, et al.

Large-scale code generation models such as Codex and CodeT5 have achieved impressive performance. However, libraries are upgraded or deprecated very frequently and re-training large-scale language models is computationally expensive. Therefore, Continual Learning (CL) is an important aspect that remains underexplored in the code domain. In this paper, we introduce a benchmark called CodeTask-CL that covers a wide range of tasks, including code generation, translation, summarization, and refinement, with different input and output programming languages. Next, on our CodeTask-CL benchmark, we compare popular CL techniques from the NLP and Vision domains. We find that effective methods like Prompt Pooling (PP) suffer from catastrophic forgetting due to the unstable training of the prompt selection mechanism caused by stark distribution shifts in coding tasks. We address this issue with our proposed method, Prompt Pooling with Teacher Forcing (PP-TF), which stabilizes training by enforcing constraints on the prompt selection mechanism and leads to a 21.54% improvement over Prompt Pooling. Along with the benchmark, we establish a training pipeline that can be used for CL on code models, which we believe can motivate further development of CL methods for code models. Our code is available at https://github.com/amazon-science/codetaskcl-pptf.
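
To make the selection mechanism concrete, the sketch below illustrates prompt pooling with a teacher-forced selection constraint in the spirit of PP-TF: during training on a task, prompt selection is restricted to a fixed subset of the pool assigned to that task, so gradients never disturb prompts owned by earlier tasks; at inference, ordinary top-k query-key matching is used. The class name PromptPoolTF, the shapes, and the task-to-prompt assignment scheme are illustrative assumptions, not the authors' implementation; see the linked repository for that.

    import torch
    import torch.nn.functional as F

    class PromptPoolTF(torch.nn.Module):
        """Prompt pool with a teacher-forced per-task selection constraint (sketch)."""

        def __init__(self, pool_size=20, prompt_len=8, dim=768, top_k=4):
            super().__init__()
            self.keys = torch.nn.Parameter(torch.randn(pool_size, dim))
            self.prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, dim))
            self.top_k = top_k
            # Teacher forcing: a fixed task -> prompt-index assignment chosen
            # before training (hypothetical; e.g. disjoint blocks of the pool).
            self.task_to_prompts = {}

        def forward(self, query, task_id=None):
            # query: (batch, dim) embeddings from a frozen encoder.
            sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
            if task_id is not None:
                # Training: mask out every prompt not assigned to this task, so
                # selection cannot drift under a distribution shift and other
                # tasks' prompts receive no gradient (the stabilizing constraint).
                mask = torch.full_like(sim, float("-inf"))
                mask[:, self.task_to_prompts[task_id]] = 0.0
                sim = sim + mask
            idx = sim.topk(self.top_k, dim=-1).indices        # (batch, top_k)
            selected = self.prompts[idx].flatten(1, 2)        # (batch, top_k*prompt_len, dim)
            match_loss = (1.0 - sim.gather(1, idx)).mean()    # pull chosen keys toward queries
            return selected, match_loss                       # prepend `selected` to input embeddings

    # Hypothetical usage: give task 0 the first five prompts, then train on it.
    pool = PromptPoolTF()
    pool.task_to_prompts[0] = list(range(5))
    prompts, loss = pool(torch.randn(2, 768), task_id=0)

A CL loop would train on tasks sequentially, passing the current task_id so that only that task's prompts and keys are updated, and would omit task_id at evaluation time so prompts are retrieved purely by query-key similarity.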
