Label Inference Attack against Split Learning under Regression Setting

by Shangyu Xie, et al.

As a crucial building block of vertical Federated Learning (vFL), Split Learning (SL) has demonstrated its practicality in two-party model-training collaboration, where one party holds the features of data samples and the other party holds the corresponding labels. This method is claimed to be private because the shared information consists only of embedding vectors and gradients, rather than the private raw data and labels. However, recent works have shown that the private labels can be leaked through the gradients. These existing attacks work only in the classification setting, where the private labels are discrete. In this work, we go a step further and study label leakage for regression models, where the private labels are continuous numbers (instead of discrete labels as in classification). The unbounded output range makes it harder for previous attacks to infer continuous labels. To address this limitation, we propose a novel learning-based attack that integrates gradient information with extra learning-regularization objectives derived from model-training properties, and that can effectively infer labels in the regression setting. Comprehensive experiments on various datasets and models demonstrate the effectiveness of our proposed attack. We hope our work paves the way for future analyses that make the vFL framework more secure.
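The leakage channel the abstract alludes to can be illustrated with a toy sketch (not the paper's actual learning-based attack): under an MSE loss with a linear top model, the gradient returned to the feature-holding party is proportional to the residual (prediction minus label), so an attacker who additionally knew the top-model weights (a strong, purely illustrative assumption) could recover a continuous label exactly. All variable names below are hypothetical.

```python
import numpy as np

# Toy two-party split-learning step under a regression (MSE) loss.
# Party A holds features and a bottom model; Party B holds the private
# continuous label y and a linear top model w_top. Only the embedding h
# and the gradient dL/dh cross the party boundary.

rng = np.random.default_rng(0)

# Party A: bottom model maps private raw features to an embedding.
x = rng.normal(size=4)              # private features
W_bottom = rng.normal(size=(3, 4))  # bottom model weights
h = W_bottom @ x                    # embedding sent to Party B

# Party B: computes the prediction and loss against its private label.
w_top = rng.normal(size=3)          # top model weights
y = 2.5                             # private regression label
pred = w_top @ h
loss = (pred - y) ** 2
grad_h = 2.0 * (pred - y) * w_top   # gradient sent back to Party A

# Attacker's view (Party A): h and grad_h are observed. If w_top were
# also known, the residual, and hence y, falls out of the gradient.
residual = grad_h[0] / (2.0 * w_top[0])   # equals (pred - y)
y_recovered = w_top @ h - residual
print(y_recovered)
```

The recovered value matches the private label exactly in this idealized setup; the paper's attack instead learns to infer labels without assuming access to the top-model weights, using gradient information plus regularization objectives.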


