Automated Source Code Generation and Auto-completion Using Deep Learning: Comparing and Discussing Current Language-Model-Related Approaches

by Juan Cruz-Benito, et al.

In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that they can generate text that reads as if written by humans, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable applications of this type of modeling is programming languages. For years, the Machine Learning community has been researching this software engineering area, pursuing goals like auto-completing, generating, fixing, or evaluating code programmed by humans. Considering the increasing popularity of Deep-Learning-enabled language models, we detected a lack of empirical papers that compare different deep learning architectures for creating and using language models based on programming code. This paper compares different neural network architectures, such as AWD-LSTMs, AWD-QRNNs, and Transformers, while using transfer learning and different tokenizations to see how they behave in building language models over a Python dataset for code generation and fill-mask tasks. Based on the results, we discuss the different strengths and weaknesses of each approach and what gaps remain in evaluating such language models or applying them in a real programming context.
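The abstract highlights tokenization choice as one of the variables compared. As a minimal illustrative sketch (not taken from the paper), the trade-off between word-level and character-level tokenization of Python source can be seen in the sequence length and vocabulary each produces:

```python
import re

def word_tokenize(code):
    # Naive word-level tokenization: identifiers/keywords as units,
    # punctuation as separate tokens. Shorter sequences, larger vocabulary.
    return re.findall(r"\w+|[^\w\s]", code)

def char_tokenize(code):
    # Character-level tokenization: tiny vocabulary, much longer sequences.
    return list(code)

snippet = "def add(a, b):\n    return a + b"
words = word_tokenize(snippet)
chars = char_tokenize(snippet)
print("word-level:", len(words), "tokens,", len(set(words)), "unique")
print("char-level:", len(chars), "tokens,", len(set(chars)), "unique")
```

Subword schemes such as BPE, commonly used with Transformer models, sit between these two extremes, which is why tokenization interacts with architecture choice when modeling code.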

