Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis

by Ruinan Jin, et al.

Deep learning-based image synthesis techniques have been applied in healthcare research to generate medical images that support open research and augment medical datasets. Training generative adversarial networks (GANs) usually requires large amounts of training data. Federated learning (FL) provides a way to train a central model on distributed data while keeping raw data local. However, because the FL server cannot access the raw data, it is vulnerable to backdoor attacks, an adversarial attack that poisons the training data. Most backdoor attack strategies focus on classification models and centralized domains. It remains an open question whether existing backdoor attacks can affect GAN training and, if so, how to defend against them in the FL setting. In this work, we investigate the overlooked issue of backdoor attacks in federated GANs (FedGANs). We find that the attack succeeds because some local discriminators overfit the poisoned data and corrupt the local GAN equilibrium; when the generators' parameters are averaged, this contamination spreads to other clients and yields high generator loss. We therefore propose FedDetect, an efficient and effective defense against backdoor attacks in the FL setting, which allows the server to detect clients' adversarial behavior based on their losses and block the malicious clients. Our extensive experiments on two medical datasets with different modalities demonstrate that backdoor attacks on FedGANs can result in synthetic images with low fidelity. After detecting and suppressing the malicious clients with the proposed defense strategy, we show that FedGANs can synthesize high-quality labeled medical datasets for data augmentation, improving classification models' performance.
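The defense idea described above — a server that inspects clients' reported losses and excludes outliers before aggregation — can be illustrated with a minimal sketch. This is not the paper's actual FedDetect algorithm; the z-score rule, the `z_thresh` parameter, and the function names are illustrative assumptions, and real model parameters would be tensors rather than flat lists.

```python
import statistics

def detect_malicious(client_losses, z_thresh=2.0):
    """Flag clients whose reported generator loss is an unusually high outlier.

    Hypothetical rule: a client is flagged if its loss lies more than
    `z_thresh` sample standard deviations above the mean across clients.
    """
    mean = statistics.mean(client_losses)
    std = statistics.stdev(client_losses)
    return [i for i, loss in enumerate(client_losses)
            if std > 0 and (loss - mean) / std > z_thresh]

def federated_average(client_weights, flagged):
    """Average parameter vectors only over clients not flagged as malicious."""
    kept = [w for i, w in enumerate(client_weights) if i not in flagged]
    return [sum(vals) / len(kept) for vals in zip(*kept)]
```

In this toy setting, a poisoned client whose discriminator has overfit the triggered data reports a conspicuously high generator loss, so a simple outlier test separates it from benign clients and its parameters are dropped from the federated average.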




