Learning to Represent Edits

10/31/2018
by Pengcheng Yin et al.

We introduce the problem of learning distributed representations of edits. By combining a "neural editor" with an "edit encoder", our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem.
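To make the described architecture concrete, the sketch below shows how an "edit encoder" and a "neural editor" fit together: the encoder maps an observed (before, after) pair to a fixed-size edit vector, and the editor conditions on that vector to apply the edit to a new input. This is a minimal illustrative sketch, not the paper's implementation; all class names, pooling choices, and dimensions here are assumptions, and the actual models are sequence-to-sequence and graph-based networks rather than this toy version.

```python
# Minimal sketch of the edit-encoder / neural-editor pairing.
# All names and hyperparameters are illustrative assumptions,
# not the architecture from the paper.
import torch
import torch.nn as nn

VOCAB, DIM, EDIT_DIM = 1000, 64, 32

class EditEncoder(nn.Module):
    """Maps an (x_before, x_after) pair to a fixed-size edit vector."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.proj = nn.Linear(2 * DIM, EDIT_DIM)

    def forward(self, x_before, x_after):
        # Mean-pool the token embeddings of each version, then project
        # the concatenation down to the edit-representation space.
        before = self.embed(x_before).mean(dim=1)
        after = self.embed(x_after).mean(dim=1)
        return self.proj(torch.cat([before, after], dim=-1))

class NeuralEditor(nn.Module):
    """Applies an edit vector to a new input, predicting the edited output."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.gru = nn.GRU(DIM + EDIT_DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, x_new, edit_vec):
        # Condition every step on the edit vector by concatenating it
        # to each input token embedding.
        emb = self.embed(x_new)
        cond = edit_vec.unsqueeze(1).expand(-1, emb.size(1), -1)
        hidden, _ = self.gru(torch.cat([emb, cond], dim=-1))
        return self.out(hidden)  # per-token logits over the vocabulary

# Usage: encode an observed edit, then transfer it to a new input.
encoder, editor = EditEncoder(), NeuralEditor()
x_before = torch.randint(0, VOCAB, (1, 10))
x_after = torch.randint(0, VOCAB, (1, 10))
x_new = torch.randint(0, VOCAB, (1, 12))
logits = editor(x_new, encoder(x_before, x_after))
print(logits.shape)  # torch.Size([1, 12, 1000])
```

Training both parts jointly to reconstruct x_after from (x_before, edit vector) is what forces the edit vector to carry the salient information of the edit rather than the content of either version.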


Related research

06/11/2021
Assessing the Effectiveness of Syntactic Structure to Learn Code Edit Representations
In recent times, it has been shown that one can use code as data to aid ...

10/19/2017
Bayesian Networks for Studying Road Accidents Among Young People in Tuscany
This paper aims to analyse adolescents' road accidents in Tuscany. The a...

05/27/2020
A Structural Model for Contextual Code Changes
We address the problem of predicting edit completions based on a learned...

12/04/2018
A Retrieve-and-Edit Framework for Predicting Structured Outputs
For the task of generating complex outputs such as source code, editing ...

10/23/2018
Neural Network Models for Natural Language Inference Fail to Capture the Semantics of Inference
Neural network models have been very successful for natural language inf...

04/04/2019
Neural Networks for Modeling Source Code Edits
Programming languages are emerging as a challenging and interesting doma...

07/30/2021
The Minimum Edit Arborescence Problem and Its Use in Compressing Graph Collections [Extended Version]
The inference of minimum spanning arborescences within a set of objects ...
