Distributed Layer-Partitioned Training for Privacy-Preserved Deep Learning

04/12/2019
by Chun-Hsien Yu, et al.

Deep learning techniques have achieved remarkable results in many domains. Training deep learning models often requires large datasets, which may mean uploading sensitive information to the cloud to accelerate training. To protect such sensitive information, we propose distributed layer-partitioned training with step-wise activation functions for privacy-preserving deep learning. Experimental results show that our method is simple and effective.
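The abstract describes two ideas working together: the network's layers are partitioned across parties (the data owner keeps the early layers, a remote worker holds the rest), and a step-wise activation function discretizes the values crossing the partition boundary. Only this abstract is available here, so the sketch below is an illustrative reconstruction of that setup, not the paper's actual implementation; all names, shapes, and the specific quantization scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_activation(x, n_steps=4):
    """Quantize activations into n_steps discrete levels in [0, 1].

    Discretizing the values sent across the partition boundary limits
    how much information about the raw input the server can recover.
    (Illustrative choice; the paper's step-wise function may differ.)
    """
    clipped = np.clip(x, 0.0, 1.0)
    return np.floor(clipped * n_steps) / n_steps

# --- Client side: holds the first (private) layers and the raw data ---
W_client = rng.normal(size=(8, 16))   # hypothetical first-layer weights

def client_forward(x):
    hidden = np.tanh(x @ W_client)    # private feature extraction
    return step_activation(hidden)    # only discretized values leave the client

# --- Server side: holds the remaining layers, never sees raw inputs ---
W_server = rng.normal(size=(16, 3))   # hypothetical later-layer weights

def server_forward(h):
    logits = h @ W_server
    return logits.argmax(axis=1)

x = rng.normal(size=(5, 8))           # sensitive client data stays local
h = client_forward(x)                 # transmitted representation
pred = server_forward(h)              # server computes on coarse activations
```

Under this reading, the privacy argument is that the server only ever observes the coarsely quantized intermediate representation `h`, never the raw inputs `x` or the client-side weights.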

Related research:

Privacy-Preserving Chaotic Extreme Learning Machine with Fully Homomorphic Encryption (08/04/2022)
Disguised-Nets: Image Disguising for Privacy-preserving Deep Learning (02/05/2019)
Deep Learning with Differential Privacy (07/01/2016)
A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service (03/30/2020)
Practical Deep Learning with Bayesian Principles (06/06/2019)
DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep neural networks (12/20/2020)
Securing Input Data of Deep Learning Inference Systems via Partitioned Enclave Execution (07/03/2018)
