SLaDe: A Portable Small Language Model Decompiler for Optimized Assembler

by Jordi Armengol-Estapé, et al.

Decompilation is a well-studied area with numerous high-quality tools available. These are frequently used for security tasks and to port legacy code. However, they regularly generate difficult-to-read programs and require a large amount of engineering effort to support new programming languages and ISAs. Recent interest in neural approaches has produced portable tools that generate readable code. However, to date such techniques are usually restricted to synthetic programs without optimization, and their portability has not been evaluated. Furthermore, while the code generated may be more readable, it is usually incorrect. This paper presents SLaDe, a Small Language model Decompiler based on a sequence-to-sequence transformer trained over real-world code. We develop a novel tokenizer and exploit no-dropout training to produce high-quality code. We use type inference to generate programs that are more readable and accurate than standard analytic and recent neural approaches. Unlike standard approaches, SLaDe can infer out-of-context types, and unlike neural approaches, it generates correct code. We evaluate SLaDe on over 4,000 functions from AnghaBench on two ISAs and at two optimization levels. SLaDe is up to 6 times more accurate than Ghidra, a state-of-the-art, industrial-strength decompiler, up to 4 times more accurate than the large language model ChatGPT, and generates significantly more readable code than both.
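To give a concrete flavor of the tokenization step the abstract mentions, the sketch below shows a toy assembly tokenizer that splits AT&T-syntax x86 into mnemonics, operands, and punctuation. This is purely illustrative; the function name `tokenize_asm` and the regex rules are assumptions for this example and do not reflect SLaDe's actual tokenizer design.

```python
import re

def tokenize_asm(asm: str) -> list:
    """Toy tokenizer: split AT&T-syntax x86 assembly into
    mnemonics, registers, immediates, and punctuation.
    Illustrative only -- not SLaDe's real tokenizer."""
    tokens = []
    for line in asm.strip().splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if not line:
            continue
        # Split on whitespace; keep parentheses, commas, colons,
        # and brackets as their own tokens via the capture group.
        tokens.extend(t for t in re.split(r"([(),:\[\]])|\s+", line) if t)
    return tokens

print(tokenize_asm("movl  -4(%rbp), %eax\naddl  $1, %eax"))
```

A seq2seq decompiler would feed a token stream like this to the encoder and decode C source tokens on the other side; the quality of this vocabulary split is one factor in how well the small model generalizes to unseen, optimized assembly.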




