Reducing Communication for Split Learning by Randomized Top-k Sparsification

05/29/2023
by Fei Zheng, et al.

Split learning is a simple solution for Vertical Federated Learning (VFL) that has drawn substantial attention in both research and application due to its simplicity and efficiency. However, communication cost remains a crucial issue for split learning. In this paper, we investigate multiple communication-reduction methods for split learning, including cut-layer size reduction, top-k sparsification, quantization, and L1 regularization. Through analysis of cut-layer size reduction and top-k sparsification, we further propose randomized top-k sparsification, which helps the model generalize and converge better: top-k elements are selected with large probability, while non-top-k elements retain a small probability of being selected. Empirical results show that, compared with other communication-reduction methods, our proposed randomized top-k sparsification achieves better model performance at the same compression level.
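The idea of selecting top-k elements with large probability while leaving non-top-k elements a small chance of selection can be sketched as follows. This is a minimal illustration, not the authors' exact scheme: the swap probability `eps` and the uniform sampling of replacement indices are assumptions made for the sketch.

```python
import numpy as np

def randomized_topk(x, k, eps=0.1, rng=None):
    """Randomized top-k sparsification (illustrative sketch).

    Each of the k largest-magnitude entries is kept with probability
    1 - eps; every slot dropped this way is refilled with a random
    non-top-k entry, so the output always has (up to) k nonzeros.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    order = np.argsort(np.abs(x))[::-1]   # indices sorted by |x|, descending
    topk, rest = order[:k], order[k:]
    keep = topk[rng.random(k) > eps]      # keep each top-k index w.p. 1 - eps
    n_swap = k - keep.size                # slots freed by dropped top-k indices
    if n_swap > 0 and rest.size > 0:
        swapped = rng.choice(rest, size=n_swap, replace=False)
        keep = np.concatenate([keep, swapped])
    mask = np.zeros_like(x)
    mask[keep] = 1.0
    return x * mask                       # sparsified vector, same shape as x

# usage: compress a cut-layer activation vector down to k = 3 nonzeros
v = np.array([0.1, -2.0, 0.05, 3.0, -0.4, 1.5])
sv = randomized_topk(v, k=3, rng=np.random.default_rng(0))
```

With `eps = 0` this reduces to plain top-k sparsification; a small positive `eps` occasionally transmits non-top-k activations, which is the randomization the paper credits with better generalization and convergence.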

