A Four-Pronged Defense Against Byzantine Attacks in Federated Learning
Federated learning (FL) is an emerging distributed learning paradigm for training a shared global model without violating users' privacy. However, FL has been shown to be vulnerable to various Byzantine attacks, in which malicious participants, acting independently or in collusion, upload well-crafted updates to degrade the performance of the global model. Existing defenses mitigate only a subset of Byzantine attacks and thus fail to provide a comprehensive shield for FL; moreover, they cannot simply be combined, because they rest on contradictory assumptions. In this paper, we propose FPD, a four-pronged defense against both non-colluding and colluding Byzantine attacks. Our main idea is to filter updates by absolute similarity rather than the relative similarity used in existing works. To this end, we first propose a reliable client selection strategy that nips the majority of threats in the bud. Second, we design a simple but effective score-based detection method to mitigate colluding attacks. Third, we construct an enhanced spectral-based outlier detector to accurately discard abnormal updates when the training data is not independent and identically distributed (non-IID). Finally, we design an update-denoising step that rectifies the direction of slightly noisy but harmful updates. The four sequentially combined modules effectively reconcile the contradiction between addressing non-colluding and colluding Byzantine attacks. Extensive experiments on three benchmark image classification datasets against four state-of-the-art Byzantine attacks demonstrate that FPD drastically outperforms existing defenses in both IID and non-IID scenarios (with a 30% improvement in model accuracy).
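To make the score-based detection idea (the second module) concrete, the sketch below flags clients whose uploaded updates are near-duplicates of too many peers, a common signature of colluding attackers who submit highly similar crafted updates. This is only an illustrative sketch: the function names, the cosine-similarity scoring rule, and the parameters `sim_threshold` and `max_clones` are our assumptions, not the paper's exact scoring method.

```python
import numpy as np

def pairwise_cosine(updates):
    """Cosine-similarity matrix between flattened client updates."""
    U = np.stack([u / (np.linalg.norm(u) + 1e-12) for u in updates])
    return U @ U.T

def score_based_detection(updates, sim_threshold=0.99, max_clones=1):
    """
    Flag suspected colluding clients: score each client by how many
    peers its update is (almost) identical to, then keep only clients
    with few near-duplicates. Thresholds are illustrative placeholders,
    not the values used in the paper.
    """
    S = pairwise_cosine(updates)
    np.fill_diagonal(S, 0.0)           # ignore self-similarity
    scores = (S > sim_threshold).sum(axis=1)
    benign = [i for i, s in enumerate(scores) if s <= max_clones]
    return benign, scores

# Usage: 8 benign clients with independent noise, 4 colluders sharing
# one crafted update (plus tiny jitter to evade exact-match checks).
rng = np.random.default_rng(0)
benign_updates = [rng.normal(size=1000) for _ in range(8)]
crafted = rng.normal(size=1000)
colluders = [crafted + 1e-4 * rng.normal(size=1000) for _ in range(4)]
kept, scores = score_based_detection(benign_updates + colluders)
print(kept)    # expected: indices 0-7 (the benign clients)
```

The key design choice this illustrates is the use of absolute similarity: a client is judged by a fixed similarity threshold against its peers, rather than by how it ranks relative to the other updates, so a colluding majority cannot shift the acceptance region in its favor.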