SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks

11/21/2022
by Sunder Ali Khowaja, et al.

Applications in vehicular networks benefit from the beyond-5G and 6G vision of ultra-dense network topologies, low latency, and high data rates. Vehicular networks have long faced data privacy concerns, which led to the adoption of distributed learning techniques such as federated learning. Although federated learning mitigates data privacy issues to some extent, it remains vulnerable to model inversion and model poisoning attacks. We assume that the design of defense mechanisms and attacks are two sides of the same coin: designing a method to reduce vulnerability requires an attack that is effective, challenging, and has real-world implications. In this work, we propose the simulated poisoning and inversion network (SPIN), which uses an optimization approach to reconstruct data from a differential model trained by a vehicular node and intercepted in transit to a roadside unit (RSU). We then train a generative adversarial network (GAN) so that the quality of the generated data improves with each communication round and global update from the RSU. Evaluation results show the qualitative and quantitative effectiveness of the proposed approach: the attack initiated by SPIN reduces accuracy by up to 22% on publicly available datasets while using only a single attacker. We believe that simulating such attacks will help in designing effective defense mechanisms.
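The abstract does not give implementation details, but the inversion step it describes, reconstructing private training data from an intercepted model update via optimization, is commonly realized as gradient matching (in the style of deep leakage from gradients). The sketch below illustrates that idea only; every name in it (SmallNet, reconstruct_from_gradients, the input shape, the optimizer settings) is a hypothetical choice for illustration, not the authors' actual SPIN implementation.

```python
# Hedged sketch of the inversion step: recover a private training batch by
# optimizing a dummy input/label pair whose gradients match an intercepted
# update (deep-leakage-from-gradients style). All names and hyperparameters
# here are illustrative assumptions, not the authors' SPIN code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Stand-in for the local model trained at a vehicular node."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        return self.fc(F.relu(self.conv(x)).flatten(1))

def reconstruct_from_gradients(model, true_grads, num_classes=10, steps=30):
    """Optimize dummy data so its gradients match the intercepted ones."""
    dummy_x = torch.randn(1, 3, 32, 32, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft labels
    opt = torch.optim.LBFGS([dummy_x, dummy_y], lr=0.1)

    def closure():
        opt.zero_grad()
        # Soft-label cross-entropy (requires PyTorch >= 1.10).
        loss = F.cross_entropy(model(dummy_x), dummy_y.softmax(dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        # L2 distance between dummy gradients and intercepted gradients.
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
        match.backward()
        return match

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.softmax(dim=-1).detach()

# Simulate interception: the gradient a victim node would send toward the RSU.
model = SmallNet()
victim_x, victim_y = torch.randn(1, 3, 32, 32), torch.tensor([3])
victim_loss = F.cross_entropy(model(victim_x), victim_y)
true_grads = [g.detach() for g in
              torch.autograd.grad(victim_loss, model.parameters())]

recovered_x, recovered_y = reconstruct_from_gradients(model, true_grads)
```

The GAN component described in the abstract would then use such reconstructions as a training signal for a generator that improves with each global round; that stage is omitted from this sketch.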

Related research

01/23/2023 · Backdoor Attacks in Peer-to-Peer Federated Learning
We study backdoor attacks in peer-to-peer federated learning systems on ...

04/27/2020 · Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning
With the rapid increase of computing power and dataset volume, machine...

11/30/2021 · Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
Gradient inversion attack (or input recovery from gradient) is an emergi...

12/05/2022 · FedCC: Robust Federated Learning against Model Poisoning Attacks
Federated Learning has emerged to cope with rising concerns about priva...

03/01/2023 · Mitigating Backdoors in Federated Learning with FLD
Federated learning allows clients to collaboratively train a global mode...

04/12/2021 · Practical Defences Against Model Inversion Attacks for Split Neural Networks
We describe a threat model under which a split network-based federated l...

01/12/2022 · Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data
The past decade has seen a rapid adoption of Artificial Intelligence (AI...
