FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning
Federated Averaging (FedAvg, also known as Local-SGD) (McMahan et al., 2017) is a classical federated learning algorithm in which clients run multiple local SGD steps before communicating their update to an orchestrating server. We propose a new federated learning algorithm, FedPAGE, which further reduces the communication complexity by using the recent optimal PAGE method (Li et al., 2021) in place of plain SGD in FedAvg. We show that FedPAGE requires far fewer communication rounds than previous local methods for both federated convex and nonconvex optimization. Concretely, 1) in the convex setting, the number of communication rounds of FedPAGE is O(N^{3/4}/(Sϵ)), improving the best-known result O(N/(Sϵ)) of SCAFFOLD (Karimireddy et al., 2020) by a factor of N^{1/4}, where N is the total number of clients (usually very large in federated learning), S is the number of clients sampled in each communication round, and ϵ is the target error; 2) in the nonconvex setting, the number of communication rounds of FedPAGE is O((√N + S)/(Sϵ^2)), improving the best-known result O(N^{2/3}/(S^{2/3}ϵ^2)) of SCAFFOLD (Karimireddy et al., 2020) by a factor of N^{1/6}S^{1/3} when the number of sampled clients satisfies S ≤ √N. Note that in both settings, the communication cost per round is the same for FedPAGE and SCAFFOLD. As a result, FedPAGE achieves new state-of-the-art results in communication complexity for both federated convex and nonconvex optimization.
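To make the core idea concrete, the sketch below illustrates how a PAGE-style variance-reduced gradient estimator could replace the plain SGD steps in a client's local update. This is a minimal illustration under stated assumptions, not the paper's pseudocode: the function names (`page_local_steps`, `full_grad`, `batch_grad`, `sample_batch`) and the toy least-squares client are hypothetical, and FedPAGE's actual client sampling, server aggregation, and step-size rules are given in the paper.

```python
import numpy as np

def page_local_steps(x, full_grad, batch_grad, sample_batch,
                     num_steps, lr, switch_prob, rng):
    """Local steps driven by a PAGE-style estimator (illustrative sketch).

    full_grad(x)       -> full local gradient at x
    batch_grad(x, b)   -> gradient at x on minibatch b
    sample_batch(rng)  -> draw a minibatch of local data indices
    switch_prob        -> probability of refreshing with a full gradient
    """
    g = full_grad(x)                       # start from a full local gradient
    for _ in range(num_steps):
        x_new = x - lr * g
        if rng.random() < switch_prob:
            g = full_grad(x_new)           # occasional full-gradient refresh
        else:
            b = sample_batch(rng)          # same minibatch at both iterates
            g = g + batch_grad(x_new, b) - batch_grad(x, b)
        x = x_new
    return x

# Toy usage: one client running local steps on a least-squares objective.
rng = np.random.default_rng(0)
A, y = rng.normal(size=(200, 10)), rng.normal(size=200)
full_grad = lambda x: A.T @ (A @ x - y) / len(y)
batch_grad = lambda x, b: A[b].T @ (A[b] @ x - y[b]) / len(b)
sample_batch = lambda rng: rng.choice(len(y), size=16, replace=False)
x = page_local_steps(np.zeros(10), full_grad, batch_grad, sample_batch,
                     num_steps=50, lr=0.1, switch_prob=0.1, rng=rng)
```

The key design point, relative to plain local SGD, is that the estimator g is only occasionally recomputed from a full local gradient and is otherwise updated with a cheap minibatch gradient difference, which is what drives the variance reduction underlying PAGE's complexity guarantees.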