How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?

11/27/2019
by Zixiang Chen, et al.

A recent line of research on deep learning focuses on the extremely over-parameterized setting and shows that when the network width is larger than a high-degree polynomial of the training sample size n and the inverse target accuracy ϵ^-1, deep neural networks trained by (stochastic) gradient descent enjoy favorable optimization and generalization guarantees. Very recently, it was shown that under a certain margin assumption on the training data, a polylogarithmic width condition suffices for two-layer ReLU networks to converge and generalize (Ji and Telgarsky, 2019). However, how much over-parameterization is sufficient to guarantee optimization and generalization for deep neural networks remains an open question. In this work, we establish sharp optimization and generalization guarantees for deep ReLU networks. Under various assumptions made in previous work, our optimization and generalization guarantees hold with network width polylogarithmic in n and ϵ^-1. Our results push the study of over-parameterized deep neural networks towards more practical settings.
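To make the over-parameterized regime concrete, below is a minimal, illustrative Python/PyTorch sketch (not the paper's code) of training a deep ReLU network with SGD, where the hidden width is set as a polylogarithmic function of the sample size n and the target accuracy ϵ^-1. The width formula, constant factor, toy dataset, and logistic-type loss are assumptions chosen for illustration, not values prescribed by the paper.

```python
import math
import torch
import torch.nn as nn

# Illustrative only: the width choice below mirrors the polylog(n, 1/eps)
# over-parameterization condition discussed in the abstract; the constant
# factor and exponent are placeholders, not the paper's.
n, d, eps = 1000, 20, 0.01                              # sample size, input dim, target accuracy
width = 8 * int(math.log(n) * math.log(1.0 / eps))      # hypothetical polylog(n, 1/eps) width

torch.manual_seed(0)
X = torch.randn(n, d)
y = torch.sign(X[:, 0]).unsqueeze(1)                    # toy labels separable with a margin

# A deep (three-hidden-layer) ReLU network at the over-parameterized width.
model = nn.Sequential(
    nn.Linear(d, width), nn.ReLU(),
    nn.Linear(width, width), nn.ReLU(),
    nn.Linear(width, width), nn.ReLU(),
    nn.Linear(width, 1),
)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.SoftMarginLoss()                           # logistic-type loss for +/-1 labels

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```

With width roughly a few hundred here (versus a high-degree polynomial in n and ϵ^-1, which would be orders of magnitude larger), this sketch only illustrates the scale of the width condition the paper studies; the theoretical guarantees themselves depend on assumptions such as the margin condition on the data.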
