Robust Sampling in Deep Learning

06/04/2020
by Aurora Cobo Aguilera, et al.

Deep learning requires regularization mechanisms to reduce overfitting and improve generalization. We address this problem with a new regularization method based on distributionally robust optimization. The key idea is to modify each sample's contribution so as to tighten the empirical risk bound. During stochastic training, samples are selected according to their accuracy, so that the worst-performing samples contribute the most to the optimization. We study different scenarios and show where the method accelerates convergence or increases accuracy.
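One simple way to realize this idea is to reweight the per-sample losses inside each minibatch so that the hardest examples dominate the objective. The sketch below uses a softmax over the losses themselves as the weighting; this is a hypothetical illustration of loss-based reweighting, not the authors' exact sample-selection scheme, and `temperature` is an assumed knob controlling how sharply the weights concentrate on the worst samples.

```python
import numpy as np

def robust_batch_loss(per_sample_losses, temperature=1.0):
    """Weight each sample's loss by a softmax over the losses themselves,
    so the worst-performing samples contribute the most.
    Hypothetical sketch, not the paper's exact method."""
    losses = np.asarray(per_sample_losses, dtype=float)
    # Softmax over losses: a higher loss yields a larger weight.
    # Subtracting the max is the standard numerical-stability trick.
    z = (losses - losses.max()) / temperature
    weights = np.exp(z) / np.exp(z).sum()
    # Weighted average instead of the uniform minibatch mean.
    return float(np.dot(weights, losses))

batch = [0.1, 0.5, 3.0]
# The hardest sample (loss 3.0) gets most of the weight, so the
# robust loss exceeds the plain mean of 1.2.
print(robust_batch_loss(batch))
```

As `temperature` grows, the weights flatten toward the uniform average (the standard empirical risk); as it shrinks, the objective approaches the worst-case sample loss.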
