SPLBoost: An Improved Robust Boosting Algorithm Based on Self-paced Learning

06/20/2017
by Kaidong Wang, et al.

It is known that Boosting can be interpreted as a gradient descent technique that minimizes an underlying loss function. Specifically, the loss minimized by the traditional AdaBoost is the exponential loss, which has been shown to be very sensitive to random noise and outliers. Several Boosting algorithms, e.g., LogitBoost and SavageBoost, have therefore been proposed to improve the robustness of AdaBoost by replacing the exponential loss with specially designed robust loss functions. In this work, we present a new way to robustify AdaBoost: incorporating the robust learning idea of Self-paced Learning (SPL) into the Boosting framework. Specifically, we design a new robust Boosting algorithm based on the SPL regime, called SPLBoost, which can be easily implemented by slightly modifying off-the-shelf Boosting packages. Extensive experiments and a theoretical characterization are carried out to illustrate the merits of the proposed SPLBoost.
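To make the idea concrete, below is a minimal sketch of how SPL weighting could be grafted onto a discrete AdaBoost loop. It assumes the hard self-paced regularizer, under which the SPL weight of sample i is v_i = 1 if its current exponential loss falls below an age threshold lambda and 0 otherwise; the function name splboost_fit, the threshold lam, and the use of scikit-learn decision stumps are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def splboost_fit(X, y, n_rounds=50, lam=2.0):
    """Sketch of an SPL-gated discrete AdaBoost.

    X: (n, d) feature matrix; y: labels assumed in {-1, +1}.
    lam: hard SPL threshold (hypothetical name); samples whose current
    exponential loss exceeds lam are dropped from the round.
    """
    n = X.shape[0]
    F = np.zeros(n)                        # ensemble margin F(x_i)
    learners, alphas = [], []
    for _ in range(n_rounds):
        loss = np.exp(-y * F)              # per-sample exponential loss
        v = (loss < lam).astype(float)     # hard SPL weights v_i in {0, 1}
        w = loss * v                       # gate the usual AdaBoost weights
        if w.sum() == 0:
            break                          # every sample was screened out
        w /= w.sum()
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))
        if err >= 0.5 or err == 0:
            break                          # weak learner no longer useful
        alpha = 0.5 * np.log((1 - err) / err)
        F += alpha * pred
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas
```

In this reading, the only change relative to standard AdaBoost is the gating line w = loss * v: samples whose loss exceeds the threshold are excluded from the current round, which is what blunts the influence of gross outliers while leaving the rest of an off-the-shelf Boosting loop untouched.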

Related research

10/05/2015 · Boosting in the presence of outliers: adaptive classification with non-convex loss functions
This paper examines the role and efficiency of the non-convex loss funct...

01/20/2020 · SGLB: Stochastic Gradient Langevin Boosting
In this paper, we introduce Stochastic Gradient Langevin Boosting (SGLB)...

08/19/2019 · Gradient Boosting Machine: A Survey
In this survey, we discuss several different types of gradient boosting ...

08/30/2010 · Totally Corrective Boosting for Regularized Risk Minimization
Consideration of the primal and dual problems together leads to importan...

02/06/2020 · Robust Boosting for Regression Problems
The gradient boosting algorithm constructs a regression estimator using ...

04/01/2020 · Fully-Corrective Gradient Boosting with Squared Hinge: Fast Learning Rates and Early Stopping
Boosting is a well-known method for improving the accuracy of weak learn...

03/01/2021 · A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness
Alongside the well-publicized accomplishments of deep neural networks th...
