A Secure Aggregation for Federated Learning on Long-Tailed Data

07/17/2023
by   Yanna Jiang, et al.

As a distributed learning paradigm, Federated Learning (FL) faces two challenges: the unbalanced distribution of training data among participants, and model attacks by Byzantine nodes. In this paper, we consider a long-tailed data distribution in the presence of Byzantine nodes in the FL scenario. A novel two-layer aggregation method is proposed to reject malicious models and judiciously select valuable models that contain tail-class data information. We introduce the concept of a think tank to leverage the wisdom of all participants. Preliminary experiments validate that the think tank can make effective model selections for global aggregation.
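To make the two-layer idea concrete, here is a minimal, illustrative sketch of a two-stage server-side aggregation; it is not the paper's algorithm. The function name `two_stage_aggregate`, the median-distance filter used to drop suspected Byzantine updates, and the hypothetical `tail_scores` weighting (standing in for how much tail-class information each client is judged to contribute) are all assumptions introduced for illustration.

```python
import numpy as np

def two_stage_aggregate(updates, tail_scores, keep_ratio=0.7):
    """Illustrative two-stage aggregation (not the paper's method).

    Stage 1: reject the updates farthest from the coordinate-wise median,
             a common Byzantine-filtering heuristic.
    Stage 2: average the surviving updates, weighted by a hypothetical
             per-client tail-class score.
    """
    updates = np.asarray(updates, dtype=float)      # (n_clients, n_params)
    tail_scores = np.asarray(tail_scores, dtype=float)

    # Stage 1: distance of each client update to the coordinate-wise median.
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    n_keep = max(1, int(keep_ratio * len(updates)))
    kept = np.argsort(dists)[:n_keep]               # keep the closest updates

    # Stage 2: tail-score-weighted average of the kept updates.
    w = tail_scores[kept]
    if w.sum() <= 0:
        w = np.full(len(kept), 1.0 / len(kept))
    return np.average(updates[kept], axis=0, weights=w)

# Toy usage: 5 clients, 3-parameter model, one obviously poisoned update.
ups = [[0.10, 0.20, 0.10], [0.12, 0.18, 0.11], [5.0, -4.0, 9.0],
       [0.09, 0.21, 0.12], [0.11, 0.19, 0.10]]
scores = [0.2, 0.9, 0.5, 0.7, 0.3]  # hypothetical tail-class scores
print(two_stage_aggregate(ups, scores))
```

In this sketch the poisoned third update is discarded in the first stage, and the remaining updates are combined with weights favoring clients assumed to hold more tail-class information; the paper's actual selection is driven by the think-tank mechanism described above.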
