SGD with Hardness Weighted Sampling for Distributionally Robust Deep Learning

01/08/2020
by Lucas Fidon, et al.

Distributionally Robust Optimization (DRO) has been proposed as an alternative to Empirical Risk Minimization (ERM) in order to account for potential biases in the training data distribution. However, its use in deep learning has been severely restricted because the optimizers available for DRO are far less efficient than the widespread Stochastic Gradient Descent (SGD) based optimizers used for deep learning with ERM. In this work, we propose SGD with hardness weighted sampling, a principled and efficient optimization method for DRO in machine learning that is particularly suited to deep learning. We show that our optimization method can be interpreted as a principled Hard Example Mining strategy. Similar in essence and in practice to online hard example mining, the proposed algorithm is straightforward to implement and computationally as efficient as the SGD-based optimizers used for deep learning: it only requires adding a softmax layer and maintaining a history of the loss values for each training example in order to compute adaptive sampling probabilities. In contrast to typical ad hoc hard mining approaches, and exploiting recent theoretical results in deep learning optimization, we also prove the convergence of our DRO algorithm for over-parameterized deep learning networks with ReLU activation and a finite number of layers and parameters. Preliminary results demonstrate the feasibility and usefulness of our approach.
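The sampling mechanism described in the abstract is simple enough to sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' released code: the function name train_dro, the inverse-temperature parameter beta, and the map-style dataset interface are assumptions. It keeps a per-example loss history, turns it into adaptive sampling probabilities with a softmax, draws the minibatch from those probabilities, and takes an ordinary SGD step.

    # Hypothetical sketch of SGD with hardness weighted sampling.
    # Assumes: a classification model, a map-style dataset returning (x, y) pairs,
    # and per-example cross-entropy as the training loss.
    import torch
    import torch.nn.functional as F

    def train_dro(model, dataset, num_steps, batch_size, lr=0.01, beta=100.0):
        n = len(dataset)
        # History of per-example loss values; initialized uniformly (all zeros).
        loss_history = torch.zeros(n)
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)

        for _ in range(num_steps):
            # Adaptive sampling probabilities: softmax over the (stale) loss history.
            probs = F.softmax(beta * loss_history, dim=0)
            idx = torch.multinomial(probs, batch_size, replacement=True)

            x = torch.stack([dataset[i][0] for i in idx])
            y = torch.stack([dataset[i][1] for i in idx])

            # Per-example losses for the sampled minibatch.
            per_example_loss = F.cross_entropy(model(x), y, reduction="none")

            optimizer.zero_grad()
            per_example_loss.mean().backward()
            optimizer.step()

            # Refresh the loss history only for the examples that were sampled.
            loss_history[idx] = per_example_loss.detach()

        return model

In this sketch, beta plays the role of the robustness parameter: larger values concentrate the sampling on the hardest examples, while beta close to zero recovers uniform sampling and hence standard ERM training.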


Related research

03/11/2019
Accelerating Minibatch Stochastic Gradient Descent using Typicality Sampling
Machine learning, especially deep neural networks, has been rapidly deve...

06/30/2015
Online Learning to Sample
Stochastic Gradient Descent (SGD) is one of the most widely used techniq...

02/01/2017
On SGD's Failure in Practice: Characterizing and Overcoming Stalling
Stochastic Gradient Descent (SGD) is widely used in machine learning pro...

08/26/2018
Deep Learning: Computational Aspects
In this article we review computational aspects of Deep Learning (DL). D...

06/21/2023
Empirical Risk Minimization with Shuffled SGD: A Primal-Dual Perspective and Improved Bounds
Stochastic gradient descent (SGD) is perhaps the most prevalent optimiza...

08/10/2018
Ensemble Kalman Inversion: A Derivative-Free Technique For Machine Learning Tasks
The standard probabilistic perspective on machine learning gives rise to...
