Emphasis Regularisation by Gradient Rescaling for Training Deep Neural Networks with Noisy Labels

by Xinshao Wang, et al.

It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when noisy labels exist. Although great progress has been made, one crucial research question has not been thoroughly explored: which training examples should be focused on, and how much should they be emphasised, when training DNNs under label noise? In this work, we study this question and propose gradient rescaling (GR) to address it. GR modifies the magnitude of each logit vector's gradient so that relatively easier training data points are emphasised when severe noise exists, which acts as explicit emphasis regularisation and improves the generalisation performance of DNNs. Beyond regularisation, we also interpret GR from the perspectives of sample reweighting and robust loss function design, so GR helps connect these three approaches in the literature. We empirically demonstrate that GR is highly noise-robust and outperforms state-of-the-art noise-tolerant algorithms by a large margin, e.g., improving accuracy by 7% under 40% label noise. Furthermore, we present comprehensive ablation studies that explore the behaviour of GR under different conditions, which is informative for applying GR in real-world scenarios.
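To make the mechanism concrete, below is a minimal PyTorch-style sketch of gradient rescaling viewed through the sample-reweighting lens: each example's cross-entropy term is multiplied by a detached weight that grows with the predicted probability of its labelled class, so the magnitude of that example's logit gradient is scaled up for easier examples. The weighting function p_y ** beta and the parameter beta are illustrative assumptions, not the paper's exact emphasis scheme.

```python
import torch
import torch.nn.functional as F

def rescaled_ce_loss(logits, targets, beta=2.0):
    """Cross-entropy with per-example gradient rescaling.

    The gradient of the weighted loss w.r.t. each logit vector is
    proportional to w_i * (softmax(z_i) - onehot(y_i)), so the detached
    weight w_i directly rescales the magnitude of that example's logit
    gradient.  Here w_i grows with p_y (the probability assigned to the
    labelled class), emphasising relatively easier examples; `beta` is an
    illustrative knob, not a parameter taken from the paper.
    """
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # prob. of the labelled class
    weights = p_y.detach() ** beta                           # easier example -> larger weight
    weights = weights / (weights.mean() + 1e-12)             # keep the average scale near 1
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-example cross-entropy
    return (weights * ce).mean()

# Usage on a toy batch:
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
rescaled_ce_loss(logits, targets).backward()
```

Because the weights are detached from the computation graph, the backward pass behaves exactly like standard cross-entropy with each example's logit gradient rescaled, which is the reweighting interpretation mentioned in the abstract.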




