Decompiling x86 Deep Neural Network Executables

by Zhibo Liu, et al.

Due to their widespread use on heterogeneous hardware devices, deep learning (DL) models are compiled into executables by DL compilers to fully leverage low-level hardware primitives. This approach allows DL computations to be undertaken at low cost across a variety of computing platforms, including CPUs, GPUs, and various hardware accelerators. We present BTD (Bin to DNN), a decompiler for deep neural network (DNN) executables. BTD takes DNN executables and outputs full model specifications, including types of DNN operators, network topology, dimensions, and parameters that are (nearly) identical to those of the input models. BTD delivers a practical framework to process DNN executables compiled by different DL compilers and with full optimizations enabled on x86 platforms. It employs learning-based techniques to infer DNN operators, dynamic analysis to reveal network architectures, and symbolic execution to facilitate inferring dimensions and parameters of DNN operators. Our evaluation reveals that BTD enables accurate recovery of full specifications of complex DNNs with millions of parameters (e.g., ResNet). The recovered DNN specifications can be re-compiled into a new DNN executable exhibiting identical behavior to the input executable. We show that BTD can boost two representative attacks, adversarial example generation and knowledge stealing, against DNN executables. We also demonstrate cross-architecture legacy code reuse using BTD, and envision BTD being used for other critical downstream tasks like DNN security hardening and patching.
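To make the recovered artifact concrete: a full model specification, as the abstract describes it, pairs each operator's inferred type with its dimensions, parameters, and position in the network topology. The sketch below is a hypothetical illustration of such a specification (the names `RecoveredOp`, `RecoveredModel`, and the field layout are assumptions for illustration, not BTD's actual output format), including a topological ordering over the recovered graph so the model can be re-executed or re-compiled.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a recovered DNN specification. Per the abstract,
# BTD recovers operator types, network topology, dimensions, and
# parameters; the concrete classes here are illustrative assumptions.

@dataclass
class RecoveredOp:
    name: str                       # e.g. "conv1"
    op_type: str                    # inferred operator type, e.g. "Conv2D"
    inputs: list                    # predecessor operator names (topology)
    dims: dict                      # recovered dimensions (kernel, stride, ...)
    params: dict = field(default_factory=dict)  # recovered weights/biases

@dataclass
class RecoveredModel:
    ops: list

    def topo_order(self):
        """Return operator names in dependency order (Kahn's algorithm)."""
        remaining = {op.name: set(op.inputs) for op in self.ops}
        order = []
        while remaining:
            ready = [n for n, deps in remaining.items() if not deps]
            if not ready:
                raise ValueError("cycle in recovered topology")
            for n in ready:
                order.append(n)
                del remaining[n]
            for deps in remaining.values():
                deps.difference_update(ready)
        return order

# Toy two-operator "recovery": a convolution feeding a ReLU.
model = RecoveredModel(ops=[
    RecoveredOp("relu1", "ReLU", inputs=["conv1"], dims={}),
    RecoveredOp("conv1", "Conv2D", inputs=[], dims={"kernel": (3, 3), "stride": 1}),
])
print(model.topo_order())  # conv1 before relu1
```

A specification in this shape is sufficient to drive the downstream uses the abstract mentions: re-compilation into a behaviorally identical executable, or white-box attacks such as adversarial example generation.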




