FedMCSA: Personalized Federated Learning via Model Components Self-Attention

by Qi Guo, et al.

Federated learning (FL) enables multiple clients to jointly train a machine learning model without sharing their private data. However, the Non-IID data held by clients presents a tough challenge for FL. Existing personalized FL approaches rely heavily on treating one complete model as the basic unit of aggregation and ignore how differently individual layers are affected by clients' Non-IID data. In this work, we propose a new framework, federated model components self-attention (FedMCSA), to handle Non-IID data in FL; it employs a model-components self-attention mechanism to promote fine-grained cooperation between different clients. This mechanism strengthens collaboration between similar model components while reducing interference between components with large differences. We conduct extensive experiments demonstrating that FedMCSA outperforms previous methods on four benchmark datasets. Furthermore, we empirically show the effectiveness of the model-components self-attention mechanism, which is complementary to existing personalized FL methods and can significantly improve the performance of FL.
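The core idea of aggregating per-component (e.g. per-layer) parameters with self-attention can be illustrated with a minimal sketch. This is not the paper's exact algorithm; the similarity measure (cosine), the temperature, and the function and variable names below are all assumptions made for illustration. For one component, each client's flattened parameters attend to every other client's, and the softmax-normalized similarities weight the aggregation, so similar components reinforce each other while dissimilar ones contribute little:

```python
import numpy as np

def components_self_attention(client_layers, temperature=1.0):
    """Hypothetical sketch: attention-weighted aggregation of one model
    component (layer) across clients.

    client_layers: list of 1-D parameter vectors, one per client.
    Returns one personalized aggregated layer per client.
    """
    # Cosine-similarity attention scores between clients' components.
    K = np.stack([w / (np.linalg.norm(w) + 1e-12) for w in client_layers])
    scores = K @ K.T / temperature
    # Row-wise softmax over clients (numerically stabilized).
    scores -= scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    # Each client's new layer is an attention-weighted mix of all clients'.
    V = np.stack(client_layers)
    return attn @ V

# Toy usage: clients 0 and 1 hold similar layers, client 2 a dissimilar one.
layers = [np.ones(4), np.ones(4) * 0.9, -np.ones(4)]
agg = components_self_attention(layers)
```

In this toy run, client 0's aggregated layer is dominated by clients 0 and 1, while client 2's remains close to its own parameters; running the same aggregation independently per layer is what makes the cooperation fine-grained rather than whole-model.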
