Neural Networks with Finite Intrinsic Dimension Have No Spurious Valleys

02/18/2018
by Luca Venturi et al.

Neural networks provide a rich class of high-dimensional, non-convex optimization problems. Despite their non-convexity, gradient-descent methods often optimize these models successfully. This has motivated a recent surge of research attempting to characterize the properties of their loss surfaces that may be responsible for such success. In particular, several authors have noted that over-parametrization appears to act as a remedy against non-convexity. In this paper, we address this phenomenon by studying key topological properties of the loss, such as the presence or absence of "spurious valleys", defined as connected components of sub-level sets that do not include a global minimum. Focusing on a class of two-layer neural networks defined by smooth (but generally non-linear) activation functions, our main contribution is to prove that as soon as the hidden layer size matches the intrinsic dimension of the reproducing space, defined as the linear functional space generated by the activations, no spurious valleys exist, thus guaranteeing the existence of descent directions. Our setting covers smooth activations such as polynomials, in both the empirical and population risk, as well as generic activations in the empirical-risk case.
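For concreteness, the abstract's central objects can be written out as follows. This is a minimal sketch in our own notation (the symbols p, n, rho, w_j, u_j are assumptions for illustration, not fixed by the abstract): a two-layer network with hidden-layer width p,

\[ \Phi(x;\theta) = \sum_{j=1}^{p} u_j \, \rho(\langle w_j, x \rangle), \qquad \theta = (w_1, \dots, w_p, u), \quad w_j \in \mathbb{R}^n, \]

with (population or empirical) risk and sub-level sets

\[ L(\theta) = \mathbb{E}_{(x,y)} \big[ \ell(\Phi(x;\theta), y) \big], \qquad \Omega_L(\lambda) = \{ \theta : L(\theta) \le \lambda \}. \]

A spurious valley is then a connected component of some \( \Omega_L(\lambda) \) that contains no global minimizer of \( L \), and the intrinsic dimension of the reproducing space is

\[ \dim \, \operatorname{span} \{ x \mapsto \rho(\langle w, x \rangle) : w \in \mathbb{R}^n \}. \]

In this notation, the main result reads: if p is at least this dimension, no sub-level set of L contains a spurious valley.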
