Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks

by Zheng Lin, et al.

Increasingly deep neural networks hinder the democratization of privacy-enhancing distributed learning, such as federated learning (FL), to resource-constrained devices. To overcome this challenge, in this paper, we advocate the integration of the edge computing paradigm and parallel split learning (PSL), allowing multiple client devices to offload substantial training workloads to an edge server via layer-wise model split. Observing that existing PSL schemes incur excessive training latency and transmit large volumes of data, we propose an innovative PSL framework, namely, efficient parallel split learning (EPSL), to accelerate model training. Specifically, EPSL parallelizes client-side model training and reduces the dimension of local gradients for back propagation (BP) via last-layer gradient aggregation, leading to a significant reduction in server-side training and communication latency. Moreover, by considering the heterogeneous channel conditions and computing capabilities of client devices, we jointly optimize subchannel allocation, power control, and cut layer selection to minimize the per-round latency. Simulation results show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy compared with state-of-the-art benchmarks, and that the tailored resource management and layer split strategy considerably reduce latency compared with the counterpart without optimization.
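The core latency-saving step described above, last-layer gradient aggregation, can be sketched as follows. This is an illustrative outline only, not the paper's exact formulation: the function name, list shapes, and averaging rule are assumptions. The idea is that, instead of the server running one back-propagation pass per client, the per-client gradients at the aggregation layer are averaged into a single tensor, so the server performs one BP pass whose cost no longer scales with the number of clients.

```python
import math

def aggregate_last_layer_grads(grads):
    """Element-wise average of per-client last-layer gradient vectors.

    Illustrative sketch (assumed helper, not from the paper): reduces K
    per-client gradients to one, so the server back-propagates once.
    """
    k = len(grads)
    return [sum(col) / k for col in zip(*grads)]

# Example: 3 clients, each sending a 3-dimensional last-layer gradient.
client_grads = [
    [0.2, -0.4, 0.1],   # client 1
    [0.0,  0.2, 0.3],   # client 2
    [0.4,  0.8, -0.1],  # client 3
]

g_agg = aggregate_last_layer_grads(client_grads)
# g_agg is approximately [0.2, 0.2, 0.1]; the server runs a single BP
# pass on g_agg instead of three separate passes.
assert all(math.isclose(a, b) for a, b in zip(g_agg, [0.2, 0.2, 0.1]))
```

In the actual EPSL framework the aggregation also shrinks the gradient volume sent back over the wireless links, which is where the communication-latency saving reported in the abstract comes from.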


