CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning

07/05/2022
by Hung Le, et al.

Program synthesis, or code generation, aims to generate a program that satisfies a problem specification. Recent approaches using large-scale pretrained language models (LMs) have shown promising results, yet they have some critical limitations. In particular, they often follow a standard supervised fine-tuning procedure that trains a code generation model only on pairs of natural-language problem descriptions and ground-truth programs. Such a paradigm largely ignores important but potentially useful signals in the problem specification, such as unit tests, and consequently often performs poorly on complex unseen coding tasks. To address these limitations, we propose "CodeRL", a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning (RL). Specifically, during training, we treat the code-generating LM as an actor network and introduce a critic network that is trained to predict the functional correctness of generated programs and provide dense feedback signals to the actor. During inference, we introduce a new generation procedure with a critical sampling strategy that allows the model to automatically regenerate programs based on feedback from example unit tests and critic scores. For the model backbones, we extend the encoder-decoder architecture of CodeT5 with enhanced learning objectives, larger model sizes, and better pretraining data. Our method not only achieves new state-of-the-art (SOTA) results on the challenging APPS benchmark, but also shows strong zero-shot transfer capability, with new SOTA results on the simpler MBPP benchmark.
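
The actor-critic training signal described above can be stated concretely. The sketch below (PyTorch) shows one way the execution-based reward and the critic's per-token correctness scores could combine into a policy-gradient loss. The reward values follow the scheme reported in the CodeRL paper (pass, test failure, runtime error, compile error), but coderl_policy_loss and the toy tensors are hypothetical illustrations, not the released implementation.

import torch

# Reward scheme from the paper: the terminal reward depends on how far the
# sampled program gets when executed against the example unit tests.
REWARDS = {
    "compile_error": -1.0,
    "runtime_error": -0.6,
    "test_failure": -0.3,
    "pass": 1.0,
}

def coderl_policy_loss(log_probs: torch.Tensor,
                       critic_scores: torch.Tensor,
                       outcome: str) -> torch.Tensor:
    """REINFORCE-style loss for one sampled program (illustrative).

    log_probs:     actor log-probabilities of the generated tokens, shape (T,)
    critic_scores: critic's per-token estimates of functional correctness,
                   shape (T,); they redistribute the single sparse execution
                   reward over the sequence as a dense signal.
    outcome:       result of running the program against the unit tests.
    """
    reward = REWARDS[outcome]
    # Detach the critic so only the actor receives gradients from this loss.
    return -(reward * critic_scores.detach() * log_probs).sum()

# Toy usage: a 5-token "program" that failed one of the example unit tests.
log_probs = torch.log(torch.rand(5))   # stand-in for actor token log-probs
critic_scores = torch.rand(5)          # stand-in for critic token scores
loss = coderl_policy_loss(log_probs, critic_scores, "test_failure")
print(float(loss))

At inference time, the same critic scores can drive the critic-guided sampling the abstract mentions: programs that fail the example unit tests are not discarded outright but resampled or repaired, conditioned on which parts the critic rates as likely incorrect.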


Related research

Execution-based Code Generation using Deep Reinforcement Learning (01/31/2023)
RLTF: Reinforcement Learning from Unit Test Feedback (07/10/2023)
CodeT5+: Open Code Large Language Models for Code Understanding and Generation (05/13/2023)
Improving Code Generation by Training with Natural Language Feedback (03/28/2023)
Program Synthesis with Large Language Models (08/16/2021)
Improving Automatic Source Code Summarization via Deep Reinforcement Learning (11/17/2018)
Tuning Models of Code with Compiler-Generated Reinforcement Learning Feedback (05/25/2023)
