Client Selection in Nonconvex Federated Learning: Improved Convergence Analysis for Optimal Unbiased Sampling Strategy
Federated learning (FL) is a distributed machine learning paradigm in which only a subset of clients participates in each training round to reduce communication burdens. However, partial client participation in FL causes objective inconsistency, which can hinder convergence, yet this objective inconsistency has not been analyzed in existing studies on sampling methods. To tackle this issue, we propose an improved analysis method that focuses on the convergence behavior of the objective defined over the clients that actually participate. Moreover, based on our convergence analysis, we propose a novel unbiased sampling strategy, FedSRC-D, whose sampling probabilities are proportional to each client's gradient diversity and local variance. FedSRC-D is provably the optimal unbiased sampling strategy in non-convex settings for non-IID FL with respect to the given bounds. Specifically, FedSRC-D improves on the state-of-the-art convergence rate of FedAvg by O(G^2/ϵ^2 + 1/ϵ^{2/3}), and on other unbiased sampling methods by O(G^2/ϵ^2). We corroborate our results with experiments on both synthetic and real data sets.
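To make the sampling idea concrete, below is a minimal sketch of an unbiased client-selection step whose probabilities are proportional to per-client gradient diversity and local variance, as the abstract describes. The function names, the multiplicative combination of the two scores, and the inverse-probability reweighting are illustrative assumptions, not the paper's exact FedSRC-D procedure.

```python
import numpy as np

def sampling_probabilities(grad_diversity, local_variance):
    """Sketch: probabilities proportional to each client's gradient
    diversity and local variance. The product is an assumed combination;
    the paper's exact weighting may differ."""
    scores = np.asarray(grad_diversity) * np.asarray(local_variance)
    return scores / scores.sum()

def select_clients(probs, m, seed=None):
    """Sample m client indices (without replacement) according to probs."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(probs), size=m, replace=False, p=probs)

# Example: 10 clients, select 3 per round.
rng = np.random.default_rng(0)
diversity = rng.uniform(0.5, 2.0, size=10)  # hypothetical per-client gradient diversity
variance = rng.uniform(0.1, 1.0, size=10)   # hypothetical per-client local variance
probs = sampling_probabilities(diversity, variance)
selected = select_clients(probs, m=3, seed=1)

# For the aggregate to remain unbiased, each selected client's update is
# typically reweighted by 1/(m * p_i) on the server.
weights = 1.0 / (3 * probs[selected])
print("probabilities:", np.round(probs, 3))
print("selected clients:", selected, "aggregation weights:", np.round(weights, 3))
```

The inverse-probability weights in the last step are the standard device for keeping importance-sampled aggregation unbiased; they are included here to show why non-uniform selection can still yield an unbiased estimator of the full-participation update.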