A Surprising Linear Relationship Predicts Test Performance in Deep Networks

by Qianli Liao, et al.

Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors? A better understanding of this question of generalization may improve practical applications of deep networks. In this paper we show that with cross-entropy loss it is surprisingly simple to induce significantly different generalization performances for two networks that have the same architecture, the same meta-parameters and the same training error: one can either pretrain the networks with different levels of "corrupted" data or simply initialize the networks with weights of different Gaussian standard deviations. A corollary of recent theoretical results on overfitting shows that these effects are due to an intrinsic problem of measuring test performance with a cross-entropy/exponential-type loss, which can be decomposed into two components both minimized by SGD -- one of which is not related to expected classification performance. However, if we factor out this component of the loss, a linear relationship emerges between training and test losses. Under this transformation, classical generalization bounds are surprisingly tight: the empirical/training loss is very close to the expected/test loss. Furthermore, the empirical relation between classification error and normalized cross-entropy loss seems to be approximately monotonic.
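The decomposition the abstract describes can be illustrated on a toy network. For a ReLU network without biases, the output is positively homogeneous in the weights, so rescaling all layers changes the cross-entropy loss while leaving every prediction (and hence the classification error) unchanged; normalizing the weights before evaluating the loss factors that scale component out. The sketch below is illustrative only and is not the authors' implementation -- the network, data, and per-layer Frobenius normalization are assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network without biases: positively homogeneous in the weights.
W1 = rng.normal(size=(10, 5))
W2 = rng.normal(size=(5, 3))
X = rng.normal(size=(20, 10))
y = rng.integers(0, 3, size=20)

def cross_entropy(W1, W2):
    """Mean cross-entropy loss and predicted classes for the toy network."""
    logits = np.maximum(X @ W1, 0) @ W2           # forward pass
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y]).mean(), logits.argmax(axis=1)

def normalized_cross_entropy(W1, W2):
    """Cross-entropy after dividing each layer by its Frobenius norm (an
    assumed normalization scheme, used here to factor out the scale)."""
    return cross_entropy(W1 / np.linalg.norm(W1), W2 / np.linalg.norm(W2))[0]

loss_raw, pred_raw = cross_entropy(W1, W2)
loss_scaled, pred_scaled = cross_entropy(3.0 * W1, 3.0 * W2)

# Rescaling changed the loss but not a single prediction:
assert (pred_raw == pred_scaled).all()
assert not np.isclose(loss_raw, loss_scaled)

# The normalized loss is invariant to the rescaling:
assert np.isclose(normalized_cross_entropy(W1, W2),
                  normalized_cross_entropy(3.0 * W1, 3.0 * W2))
```

This makes concrete why the raw cross-entropy contains a component "not related to expected classification performance": SGD can drive the loss down purely by growing the weight scale, without any change in which class is predicted.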




