Serverless Federated AUPRC Optimization for Multi-Party Collaborative Imbalanced Data Mining

by Xidong Wu, et al.

Multi-party collaborative training, such as distributed learning and federated learning, is used to address big data challenges. However, traditional multi-party collaborative training algorithms were mainly designed for balanced data mining tasks and optimize accuracy-oriented losses (e.g., cross-entropy). The data distribution in many real-world applications is skewed, and classifiers trained to maximize accuracy perform poorly on imbalanced tasks because the models can be heavily biased toward the majority class. The Area Under the Precision-Recall Curve (AUPRC) was therefore introduced as a more effective metric. Although single-machine AUPRC maximization methods have been designed, multi-party collaborative algorithms have never been studied, and the change from the single-machine to the multi-party setting poses critical challenges. To address them, we study the serverless multi-party collaborative AUPRC maximization problem, since serverless multi-party collaborative training cuts communication cost by avoiding the server-node bottleneck. We reformulate it as a conditional stochastic optimization problem in a serverless multi-party collaborative learning setting and propose a new ServerLess biAsed sTochastic gradiEnt (SLATE) algorithm to directly optimize the AUPRC. We then apply a variance reduction technique and propose the ServerLess biAsed sTochastic gradiEnt with Momentum-based variance reduction (SLATE-M) algorithm to improve the convergence rate, which matches the best theoretical convergence result achieved by single-machine online methods. To the best of our knowledge, this is the first work to solve the multi-party collaborative AUPRC maximization problem.
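Two algorithmic ingredients named in the abstract — serverless (decentralized) training via gossip averaging over a peer topology, and momentum-based variance reduction of the stochastic gradient estimator — can be illustrated with a minimal sketch. This is not the authors' SLATE-M; the ring gossip matrix `W`, step size `eta`, momentum parameter `beta`, and the toy quadratic objective are all illustrative assumptions standing in for the AUPRC surrogate loss.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, eta, beta = 4, 5, 0.05, 0.9

# Doubly-stochastic gossip matrix for a ring topology: each node averages
# only with its two neighbours, so no central server is needed.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

# Toy objective per node: f_i(x) = 0.5 * ||x||^2, whose gradient is x.
# A stochastic gradient is the true gradient plus sampling noise.
x = rng.standard_normal((n_nodes, dim))      # one parameter copy per node
v = x + 0.1 * rng.standard_normal(x.shape)   # initial gradient estimators

for _ in range(300):
    x_new = W @ x - eta * v                  # gossip averaging + local descent
    # STORM-style momentum variance reduction, evaluated with the SAME
    # sample (noise) at the old and new iterates:
    noise = 0.1 * rng.standard_normal(x.shape)
    g_new, g_old = x_new + noise, x + noise
    v = g_new + (1 - beta) * (v - g_old)
    x = x_new

consensus = x.mean(axis=0)
print(np.linalg.norm(consensus))  # small: the nodes agree near the minimizer 0
```

The gossip step spreads information between neighbours without a server bottleneck, while the variance-reduced estimator `v` tracks the gradient more tightly than a plain stochastic gradient, which is what yields the improved convergence rate in the paper's analysis.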



