Optimizing Pipelined Computation and Communication for Latency-Constrained Edge Learning

06/11/2019
by Nicolas Skatchkovsky, et al.

Consider a device that is connected to an edge processor via a communication channel. The device holds local data that is to be offloaded to the edge processor in order to train a machine learning model, e.g., for regression or classification. Transmission of the data to the learning processor, as well as training based on Stochastic Gradient Descent (SGD), must both be completed within a time limit. Assuming that communication and computation can be pipelined, this letter investigates the optimal choice of packet payload size, given the overhead of each data packet transmission and the ratio between the computation and communication rates. This amounts to a tradeoff between bias and variance: communicating the entire data set first reduces the bias of the training process, but it may not leave sufficient time for learning. Analytical bounds on the expected optimality gap are derived to enable an effective optimization, which is validated by numerical results.
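As a rough illustration of this tradeoff, the sketch below simulates the pipelined setup under a deadline: the device transmits packets of a chosen payload size, each incurring a fixed overhead, while SGD runs in parallel on whatever data has already arrived. The timing model and all parameter values here are illustrative assumptions, not quantities from the letter.

import math

# Minimal sketch of pipelined communication and computation under a
# deadline. All parameters are illustrative assumptions, not values
# taken from the letter.
T = 1.0        # overall deadline (s)
t_oh = 0.02    # per-packet transmission overhead (s)
r_comm = 2000  # communication rate (samples/s)
r_comp = 5000  # computation rate (SGD sample updates/s)
N = 1000       # total number of samples held by the device

def simulate(payload):
    """Return (samples delivered, SGD updates completed) by deadline T
    when data is sent in packets of `payload` samples and SGD runs in
    parallel on the data received so far."""
    t_pkt = t_oh + payload / r_comm                      # time per packet
    n_pkts = min(int(T / t_pkt), math.ceil(N / payload)) # packets that arrive by T
    delivered = min(n_pkts * payload, N)
    # Pipelining: SGD can start as soon as the first packet has arrived.
    train_time = max(0.0, T - t_pkt) if delivered > 0 else 0.0
    return delivered, int(train_time * r_comp)

for payload in (10, 50, 200, 1000):
    d, u = simulate(payload)
    print(f"payload={payload:4d}: {d:4d}/{N} samples delivered, {u} SGD updates")

Under these assumed parameters, a small payload lets SGD start almost immediately but the per-packet overhead leaves part of the data set undelivered (bias), whereas a single large packet delivers the entire data set but delays the start of training and permits fewer SGD updates (variance), which is the tension the letter's payload-size optimization resolves.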


