Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel

09/15/2021
by Henrique Teles Maia, et al.

Neural network applications have become popular in both enterprise and personal settings. Network solutions are tuned meticulously for each task, and designs that can robustly resolve queries end up in high demand. As the commercial value of accurate and performant machine learning models increases, so too does the demand to protect neural architectures as confidential investments. We explore the vulnerability of neural networks deployed as black boxes across accelerated hardware through electromagnetic side channels. We examine the magnetic flux emanating from a graphics processing unit's power cable, as acquired by a cheap $3 induction sensor, and find that this signal betrays the detailed topology and hyperparameters of a black-box neural network model. The attack acquires the magnetic signal for one query with unknown input values, but known input dimensions. The network reconstruction is possible due to the modular layer sequence in which deep neural networks are evaluated. We find that each layer component's evaluation produces an identifiable magnetic signal signature, from which layer topology, width, function type, and sequence order can be inferred using a suitably trained classifier and a joint consistency optimization based on integer programming. We study the extent to which network specifications can be recovered, and consider metrics for comparing network similarity. We demonstrate the potential accuracy of this side channel attack in recovering the details for a broad range of network architectures, including random designs. We consider applications that may exploit this novel side channel exposure, such as adversarial transfer attacks. In response, we discuss countermeasures to protect against our method and other similar snooping techniques.
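The classify-then-reconcile idea in the abstract can be illustrated with a toy sketch. Everything here is invented for illustration: the "signatures" (amplitude, duration per segment), the layer classes, and the consistency rule are assumptions, and a brute-force search stands in for the paper's trained classifier and integer-programming optimization.

```python
# Hypothetical sketch of the pipeline described above: each segment of the
# magnetic trace gets a per-layer-type score, then the jointly consistent
# layer sequence with minimum total score is selected. All numbers and
# rules below are made up for illustration; the paper uses a trained
# classifier and an integer program, not nearest centroids + brute force.
import math
from itertools import product

# Toy per-layer signatures: (mean amplitude, duration) centroids (assumed).
CENTROIDS = {
    "conv":  (0.9, 5.0),
    "dense": (0.5, 2.0),
    "relu":  (0.1, 0.5),
}

def scores(segment):
    """Distance of one signal segment's features to each layer centroid."""
    return {name: math.dist(segment, c) for name, c in CENTROIDS.items()}

def infer_sequence(segments):
    """Return the lowest-cost labeling that satisfies a consistency rule.

    Stand-in for the integer program: enumerate all labelings and reject
    any where an activation ("relu") does not follow a weighted layer.
    """
    best, best_cost = None, float("inf")
    for labels in product(CENTROIDS, repeat=len(segments)):
        if any(lab == "relu" and (i == 0 or labels[i - 1] == "relu")
               for i, lab in enumerate(labels)):
            continue  # inconsistent layer order, prune
        cost = sum(scores(seg)[lab] for seg, lab in zip(segments, labels))
        if cost < best_cost:
            best, best_cost = labels, cost
    return best

# Feature tuples extracted from a (simulated) power-cable trace.
trace = [(0.88, 4.9), (0.12, 0.6), (0.48, 2.1), (0.09, 0.4)]
print(infer_sequence(trace))  # ('conv', 'relu', 'dense', 'relu')
```

The joint optimization matters because the per-segment classifier alone can emit physically impossible sequences (e.g., two activations in a row); the consistency constraints restrict the search to valid architectures.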


Related research

09/06/2022 · Side-channel attack analysis on in-memory computing architectures
In-memory computing (IMC) systems have great potential for accelerating ...

10/16/2019 · Electro-Magnetic Side-Channel Attack Through Learned Denoising and Classification
This paper proposes an upgraded electro-magnetic side-channel attack tha...

05/30/2018 · AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
Recent studies have shown that adversarial examples in state-of-the-art ...

05/31/2021 · QueryNet: An Efficient Attack Framework with Surrogates Carrying Multiple Identities
Deep Neural Networks (DNNs) are acknowledged as vulnerable to adversaria...

06/14/2018 · Hardware Trojan Attacks on Neural Networks
With the rising popularity of machine learning and the ever increasing d...

06/04/2020 · A Polynomial Neural network with Controllable Precision and Human-Readable Topology II: Accelerated Approach Based on Expanded Layer
How about converting Taylor series to a network to solve the black-box n...

03/10/2019 · Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints
As neural networks continue their reach into nearly every aspect of soft...
