Learning to Reweight Examples for Robust Deep Learning

03/24/2018
by Mengye Ren, et al.

Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns. However, they can also easily overfit to training set biases and label noise. In addition to various regularizers, example reweighting algorithms are popular solutions to these problems, but they require careful tuning of additional hyperparameters, such as example mining schedules and regularization hyperparameters. In contrast to past reweighting methods, which typically compute a weight as a function of each example's cost value, in this work we propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. To determine the example weights, our method performs a meta gradient descent step on the current mini-batch example weights (which are initialized from zero) to minimize the loss on a clean, unbiased validation set. Our proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.
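The core of the method, as described in the abstract, is a single meta step per mini-batch: take a lookahead SGD step on a weighted training loss, measure the resulting loss on a small clean validation batch, and back-propagate into the per-example weights. Below is a minimal sketch of that step in JAX with a toy linear classifier standing in for the network; the names model_apply, per_example_loss, and reweighted_step and the learning rate are illustrative assumptions, not code or values from the paper.

```python
import jax
import jax.numpy as jnp

def model_apply(params, x):
    # Toy linear classifier; stands in for any deep network.
    return x @ params["W"] + params["b"]

def per_example_loss(params, x, y):
    # Softmax cross-entropy, one value per example.
    logp = jax.nn.log_softmax(model_apply(params, x))
    return -jnp.take_along_axis(logp, y[:, None], axis=1)[:, 0]

def reweighted_step(params, train_batch, val_batch, lr=0.1):
    (x_t, y_t), (x_v, y_v) = train_batch, val_batch

    def lookahead_val_loss(eps):
        # One SGD step on the eps-weighted training loss -> perturbed parameters.
        g = jax.grad(lambda p: jnp.sum(eps * per_example_loss(p, x_t, y_t)))(params)
        params_prime = jax.tree_util.tree_map(lambda p, gi: p - lr * gi, params, g)
        # Loss of the perturbed model on the clean validation mini-batch.
        return jnp.mean(per_example_loss(params_prime, x_v, y_v))

    eps0 = jnp.zeros(x_t.shape[0])                  # example weights start at zero
    meta_grad = jax.grad(lookahead_val_loss)(eps0)  # meta gradient w.r.t. the weights
    w = jnp.maximum(-meta_grad, 0.0)                # keep only examples that help validation
    w = w / (jnp.sum(w) + 1e-8)                     # normalize within the mini-batch

    # Final update: ordinary SGD step on the w-weighted training loss.
    g = jax.grad(lambda p: jnp.sum(w * per_example_loss(p, x_t, y_t)))(params)
    new_params = jax.tree_util.tree_map(lambda p, gi: p - lr * gi, params, g)
    return new_params, w
```

In the paper this meta step is run at every training iteration with a small clean validation batch; the sketch above shows only a single step under the stated toy-model assumptions.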


Related research:

- Tug the Student to Learn Right: Progressive Gradient Correcting by Meta-learner on Corrupted Labels (02/20/2019). "While deep networks have strong fitting capability to complex input patt..."
- Push the Student to Learn Right: Progressive Gradient Correcting by Meta-learner on Corrupted Labels (02/20/2019). "While deep networks have strong fitting capability to complex input patt..."
- Deep Bilevel Learning (09/05/2018). "We present a novel regularization approach to train neural networks that..."
- Adaptive Sample Selection for Robust Learning under Label Noise (06/29/2021). "Deep Neural Networks (DNNs) have been shown to be susceptible to memoriz..."
- Prototypical Classifier for Robust Class-Imbalanced Learning (10/22/2021). "Deep neural networks have been shown to be very powerful methods for man..."
- Meta-Learning Mini-Batch Risk Functionals (01/27/2023). "Supervised learning typically optimizes the expected value risk function..."
- Exponentiated Gradient Reweighting for Robust Training Under Label Noise and Beyond (04/03/2021). "Many learning tasks in machine learning can be viewed as taking a gradie..."
