On the Generalizability of Neural Program Analyzers with respect to Semantic-Preserving Program Transformations

07/31/2020
by   Md Rafiqul Islam Rabin, et al.

With the prevalence of publicly available source code repositories to train deep neural network models, neural program analyzers can do well in source code analysis tasks such as predicting method names in given programs, which cannot be easily done by traditional program analyzers. Although such analyzers have been tested on various existing datasets, the extent to which they generalize to unforeseen source code is largely unknown. Since it is impossible to test neural program analyzers on all unforeseen programs, in this paper we propose to evaluate their generalizability with respect to semantic-preserving transformations: a generalizable neural program analyzer should perform equally well on programs that have the same semantics but different lexical appearances and syntactic structures. More specifically, we compare the results of various neural program analyzers for the method-name prediction task on programs before and after automated semantic-preserving transformations. We use three Java datasets of different sizes and three state-of-the-art neural network models for code, namely code2vec, code2seq, and Gated Graph Neural Networks (GGNN), to build nine such neural program analyzers for evaluation. Our results show that even small semantic-preserving changes to programs often cause these neural program analyzers to fail to generalize. Our results also suggest that neural program analyzers based on data and control dependencies in programs generalize better than those based only on abstract syntax trees. On the positive side, we observe that as the training dataset grows in size and diversity, the generalizability of the analyzers' correct predictions can improve as well.
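To make the evaluation setting concrete, the following sketch illustrates one semantic-preserving transformation, variable renaming, applied to a small Java method. The method and identifier names here are hypothetical examples, not taken from the paper's datasets. A generalizable method-name predictor such as code2vec, code2seq, or a GGNN-based model should produce the same prediction for both versions, since only the identifiers change while the control and data flow stay identical.

    class Original {
        // The analyzer is asked to predict this method's name from its body.
        int findMax(int[] values) {
            int max = values[0];
            for (int i = 1; i < values.length; i++) {
                if (values[i] > max) {
                    max = values[i];
                }
            }
            return max;
        }
    }

    class Renamed {
        // The same method after variable renaming: identifiers differ,
        // but the semantics are unchanged, so a generalizable analyzer
        // should still predict the same method name.
        int findMax(int[] arr) {
            int v0 = arr[0];
            for (int v1 = 1; v1 < arr.length; v1++) {
                if (arr[v1] > v0) {
                    v0 = arr[v1];
                }
            }
            return v0;
        }
    }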


