Towards Characterizing and Limiting Information Exposure in DNN Layers

07/13/2019
by Fan Mo, et al.

Pre-trained Deep Neural Network (DNN) models are increasingly used in smartphones and other user devices to enable prediction services, potentially disclosing (sensitive) information from the training data captured inside these models. Based on the concept of generalization error, we propose a framework to measure the amount of sensitive information memorized in each layer of a DNN. Our results show that, when considered individually, the last layers encode a larger amount of information from the training data than the first layers. We find that, while neurons in convolutional layers can expose more (sensitive) information than those in fully connected layers, the same DNN architecture trained on different datasets exhibits similar per-layer exposure. We evaluate an architecture that protects the most sensitive layers within the memory limits of a Trusted Execution Environment (TEE) against potential white-box membership inference attacks, without significant computational overhead.
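To make the per-layer measurement concrete, here is a minimal sketch (not the paper's implementation; the function names and the probe design are assumptions). It estimates a layer's exposure as the accuracy of a white-box membership-inference probe trained to separate that layer's activations on training members from held-out non-members, in the spirit of the generalization-error-based measure described above. It assumes a trained PyTorch model and scikit-learn.

```python
# Hypothetical sketch: per-layer exposure as membership-inference probe accuracy.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def layer_activations(model, layer, inputs):
    """Collect one layer's activations for a batch of inputs via a forward hook."""
    feats = []
    handle = layer.register_forward_hook(
        lambda mod, inp, out: feats.append(out.detach().flatten(1).cpu()))
    with torch.no_grad():
        model(inputs)
    handle.remove()
    return torch.cat(feats).numpy()

def exposure_score(model, layer, member_x, nonmember_x):
    """Accuracy of a linear probe separating member from non-member activations.

    ~0.5 means the layer leaks little about training-set membership;
    values near 1.0 mean the layer strongly memorizes its training data.
    """
    X = np.concatenate([layer_activations(model, layer, member_x),
                        layer_activations(model, layer, nonmember_x)])
    y = np.concatenate([np.ones(len(member_x)), np.zeros(len(nonmember_x))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Usage (hypothetical): score each top-level layer of a trained model with
# a batch of training (member) and held-out (non-member) examples.
# for name, layer in model.named_children():
#     print(name, exposure_score(model, layer, member_batch, nonmember_batch))
```

Under the paper's findings, such per-layer scores would rise toward the last layers, motivating the design choice of placing only the most exposed layers inside the TEE's limited memory.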


Related research

04/12/2020
DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments
We present DarkneTZ, a framework that uses an edge device's Trusted Exec...

11/11/2018
A Multi-modal Deep Neural Network approach to Bird-song identification
We present a multi-modal Deep Neural Network (DNN) approach for bird son...

04/30/2021
Memory-Efficient Deep Learning Inference in Trusted Execution Environments
This study identifies and proposes techniques to alleviate two key bottl...

06/02/2022
FACM: Correct the Output of Deep Neural Network with Middle Layers Features against Adversarial Samples
In the strong adversarial attacks against deep neural network (DNN), the...

03/13/2020
Partial Weight Adaptation for Robust DNN Inference
Mainstream video analytics uses a pre-trained DNN model with an assumpti...

10/15/2019
Reduced-Order Modeling of Deep Neural Networks
We introduce a new method for speeding up the inference of deep neural n...

08/29/2022
Demystifying Arch-hints for Model Extraction: An Attack in Unified Memory System
The deep neural network (DNN) models are deemed confidential due to thei...
