Making Split Learning Resilient to Label Leakage by Potential Energy Loss

10/18/2022
by Fei Zheng et al.

As a practical privacy-preserving learning method, split learning has drawn much attention in both academia and industry. However, its security is constantly questioned, since intermediate results are shared during training and inference. In this paper, we focus on the privacy leakage caused by the trained split model: an attacker can fine-tune the bottom model with only a few labeled samples and achieve quite good performance. To prevent this kind of leakage, we propose the potential energy loss, which makes the output of the bottom model follow a more 'complicated' distribution by pushing outputs of the same class towards the decision boundary. As a result, an adversary who fine-tunes the bottom model with only a few leaked labeled samples suffers a large generalization error. Experimental results show that our method significantly lowers the attacker's fine-tuning accuracy, making the split model more resilient to label leakage.
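The abstract describes the core idea as pushing same-class outputs apart, toward the decision boundary. A minimal illustrative sketch of such a loss is a pairwise inverse-distance "repulsion" term over same-class outputs, in analogy with the electrostatic potential energy of like charges. Note this is an assumption for illustration: the function name, the inverse-distance form, and the averaging are hypothetical, and the paper's actual formulation may differ.

```python
import numpy as np

def potential_energy_loss(embeddings, labels, eps=1e-8):
    """Hypothetical sketch of a potential-energy-style loss.

    Treats same-class bottom-model outputs as like charges and sums an
    inverse-distance repulsion over all same-class pairs; minimizing it
    spreads same-class outputs apart.  The exact form in the paper may
    differ (this is an illustrative assumption, not the authors' code).
    """
    loss, count = 0.0, 0
    for c in np.unique(labels):
        z = embeddings[labels == c]          # outputs belonging to class c
        for i in range(len(z)):
            for j in range(i + 1, len(z)):
                # inverse distance: large when two same-class outputs are close
                loss += 1.0 / (np.linalg.norm(z[i] - z[j]) + eps)
                count += 1
    return loss / max(count, 1)
```

Under this sketch, tightly clustered same-class outputs incur a high loss, while well-separated ones incur a low loss, so gradient descent on this term pushes same-class outputs apart.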

Related research:

- Measuring and Controlling Split Layer Privacy Leakage Using Fisher Information (09/21/2022). Split learning and inference propose to run training/inference of a larg...
- Does Prompt-Tuning Language Model Ensure Privacy? (04/07/2023). Prompt-tuning has received attention as an efficient tuning method in th...
- Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning (11/25/2021). Split learning is a popular technique used to perform vertical federated...
- Defending Label Inference Attacks in Split Learning under Regression Setting (08/18/2023). As a privacy-preserving method for implementing Vertical Federated Learn...
- Clustering Label Inference Attack against Practical Split Learning (03/10/2022). Split learning is deemed as a promising paradigm for privacy-preserving ...
- On-Device Model Fine-Tuning with Label Correction in Recommender Systems (10/21/2022). To meet the practical requirements of low latency, low cost, and good pr...
- Split-U-Net: Preventing Data Leakage in Split Learning for Collaborative Multi-Modal Brain Tumor Segmentation (08/22/2022). Split learning (SL) has been proposed to train deep learning models in a...
