Low-latency Federated Learning with DNN Partition in Distributed Industrial IoT Networks

10/26/2022
by Xiumei Deng, et al.

Federated Learning (FL) empowers the Industrial Internet of Things (IIoT) with distributed intelligence for industrial automation thanks to its capability of distributed machine learning without any raw data exchange. However, it is challenging for lightweight IIoT devices to perform computation-intensive local model training over large-scale deep neural networks (DNNs). Motivated by this issue, we develop a communication- and computation-efficient FL framework for resource-limited IIoT networks that integrates the DNN partition technique into the standard FL mechanism, wherein IIoT devices perform local model training over the bottom layers of the objective DNN and offload the top layers to the edge gateway. Considering imbalanced data distribution, we derive a device-specific participation rate that involves the devices with better data distributions in more communication rounds. Building on this participation rate, we propose to minimize the training delay under constraints on the device-specific participation rate, energy consumption, and memory usage. To this end, we formulate a joint optimization problem of device scheduling and resource allocation (i.e., DNN partition point, channel assignment, transmit power, and computation frequency), and solve the resulting long-term min-max mixed-integer non-linear program based on the Lyapunov technique. In particular, the proposed dynamic device scheduling and resource allocation (DDSRA) algorithm achieves a trade-off between training delay minimization and FL performance. We also provide an FL convergence bound for the DDSRA algorithm under both convex and non-convex settings. Experimental results demonstrate the feasibility of the derived device-specific participation rate and show that the DDSRA algorithm outperforms baselines in terms of test accuracy and convergence time.
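
To make the partitioned training workflow concrete, below is a minimal sketch (not the authors' implementation) of one training step with the DNN split at a configurable partition point: the IIoT device runs the bottom layers, transmits the cut-layer activations to the edge gateway, which runs the top layers, computes the loss, and returns the cut-layer gradient. The toy model, partition point, optimizer settings, and data are illustrative assumptions, written with PyTorch.

```python
# Illustrative sketch of DNN-partitioned local training (assumed names and
# hyperparameters, not the paper's code): bottom layers stay on the IIoT
# device, top layers are offloaded to the edge gateway.
import torch
import torch.nn as nn

full_model = nn.Sequential(                 # toy DNN standing in for the objective DNN
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
partition_point = 2                         # layers [0, partition_point) run on the device
device_part = full_model[:partition_point]  # bottom layers (IIoT device)
gateway_part = full_model[partition_point:] # top layers (edge gateway)

opt_device = torch.optim.SGD(device_part.parameters(), lr=0.01)
opt_gateway = torch.optim.SGD(gateway_part.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def partitioned_step(x, y):
    """One partitioned training step: device forward -> gateway forward/backward
    -> cut-layer gradient returned to the device to finish its backward pass."""
    # Device side: forward over the bottom layers; the cut-layer activations
    # are what would be transmitted uplink to the gateway.
    activations = device_part(x)
    smashed = activations.detach().requires_grad_(True)

    # Gateway side: forward over the top layers, loss, backward to the cut layer.
    logits = gateway_part(smashed)
    loss = loss_fn(logits, y)
    opt_gateway.zero_grad()
    loss.backward()
    opt_gateway.step()

    # Device side: resume backpropagation with the returned cut-layer gradient.
    opt_device.zero_grad()
    activations.backward(smashed.grad)
    opt_device.step()
    return loss.item()

# Example usage with random tensors standing in for a device's local mini-batch.
x = torch.randn(16, 32)
y = torch.randint(0, 10, (16,))
print(partitioned_step(x, y))
```

In the paper's setting, the partition point is not fixed as above but chosen per device by the DDSRA algorithm, jointly with channel assignment, transmit power, and computation frequency.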

Related research

09/04/2023 · Computation and Communication Efficient Federated Learning over Wireless Networks
Federated learning (FL) allows model training from local data by edge de...

05/22/2023 · When Computing Power Network Meets Distributed Machine Learning: An Efficient Federated Split Learning Framework
In this paper, we advocate CPN-FedSL, a novel and flexible Federated Spl...

03/03/2021 · Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things
Federated learning (FL) and split learning (SL) are state-of-the-art dis...

03/30/2020 · End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things
This work is the first attempt to evaluate and compare federated learni...

05/03/2023 · Optimal Resource Management for Hierarchical Federated Learning over HetNets with Wireless Energy Transfer
Remote monitoring systems analyze the environment dynamics in different ...

01/01/2023 · Efficient On-device Training via Gradient Filtering
Despite its importance for federated learning, continuous learning and m...
