Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power

by Binghui Li, et al.

It is well known that modern neural networks are vulnerable to adversarial examples, and a series of robust learning algorithms has been proposed to mitigate this problem. However, although some of these methods drive the robust training error to near zero, every existing algorithm still incurs a high robust generalization error. In this paper, we provide a theoretical understanding of this puzzling phenomenon from the perspective of the expressive power of deep neural networks. Specifically, for binary classification problems with well-separated data, we show that for ReLU networks, while mild over-parameterization suffices for high robust training accuracy, a constant robust generalization gap persists unless the size of the network is exponential in the data dimension d. Even when the data is linearly separable, so that achieving low clean generalization error is easy, we can still prove an exp(Ω(d)) lower bound for robust generalization. Moreover, we establish an improved upper bound of exp(𝒪(k)) on the network size needed for low robust generalization error when the data lies on a manifold of intrinsic dimension k (k ≪ d). Nonetheless, we also prove a lower bound that grows exponentially in k: the curse of dimensionality is inevitable. By demonstrating an exponential separation between the network sizes required for low robust training error and low robust generalization error, our results reveal that the hardness of robust generalization may stem from the expressive power of practical models.
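Read as a sketch, the abstract's main separation can be written out as follows (here n denotes the network size, a symbol introduced only for this illustration; this is a summary of the stated bounds, not a formal theorem statement):

```latex
% Robust fitting vs. robust generalization for ReLU networks on
% well-separated binary classification data (sketch of the abstract's bounds):
%
%   robust training:        mild over-parameterization suffices;
%   robust generalization:  n \ge \exp(\Omega(d)) in general,
%                           even for linearly separable data.
%
% When the data lies on a manifold of intrinsic dimension k \ll d,
% the required size is pinned between two exponentials in k:
\exp\bigl(\Omega(k)\bigr) \;\le\; n \;\le\; \exp\bigl(\mathcal{O}(k)\bigr)
```

The gap between "mild over-parameterization" and exp(Ω(d)) is the exponential separation the paper refers to; the manifold bounds show the exponent improves from d to k but never becomes polynomial.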

