Optimal Rate of Convergence for Deep Neural Network Classifiers under the Teacher-Student Setting

01/19/2020
by   Tianyang Hu, et al.

Classifiers built with neural networks handle large-scale, high-dimensional data, such as facial images from computer vision, extremely well, while traditional statistical methods often fail miserably. In this paper, we attempt to understand this empirical success in high-dimensional classification by deriving convergence rates of the excess risk. In particular, a teacher-student framework is proposed in which the Bayes classifier is assumed to be expressible as a ReLU neural network. In this setup, we obtain a dimension-free and unimprovable rate of convergence, namely O(n^-2/3), for classifiers trained with either the 0-1 loss or the hinge loss, where n denotes the sample size. This rate can be further improved to O(n^-1) when the data are separable.
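To make the teacher-student setting concrete, below is a minimal sketch in PyTorch: a fixed random ReLU "teacher" network defines the Bayes classifier, labels are generated from its sign (so the data are separable and the Bayes risk is zero, matching the O(n^-1) regime), and a "student" ReLU network is trained with the hinge loss. The architectures, sample sizes, and hyperparameters are illustrative assumptions, not the paper's construction.

```python
# Minimal teacher-student sketch (illustrative assumptions throughout:
# widths, dimensions, and optimizer settings are not the paper's).
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 8  # input dimension (assumed)

# "Teacher": a fixed random ReLU network whose sign gives the Bayes classifier.
teacher = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
for p in teacher.parameters():
    p.requires_grad_(False)

def sample(n):
    x = torch.rand(n, d) * 2 - 1           # features on [-1, 1]^d
    y = torch.sign(teacher(x)).squeeze(1)  # noiseless labels => separable data
    return x, y

# "Student": a ReLU network trained with the hinge loss.
student = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(student.parameters(), lr=1e-2)

x_tr, y_tr = sample(2000)
for _ in range(500):
    opt.zero_grad()
    margin = y_tr * student(x_tr).squeeze(1)
    loss = torch.clamp(1 - margin, min=0).mean()  # hinge loss
    loss.backward()
    opt.step()

# Excess 0-1 risk estimated on fresh data (Bayes risk is 0 in this setup).
x_te, y_te = sample(10000)
err = (torch.sign(student(x_te).squeeze(1)) != y_te).float().mean()
print(f"estimated excess risk: {err:.4f}")
```

Repeating the final estimate over a grid of training sizes n would let one empirically trace how the excess risk shrinks with the sample size and compare against the rates above.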
