Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks

by Eshaan Nichani et al.

One of the central questions in the theory of deep learning is to understand how neural networks learn hierarchical features. The ability of deep networks to extract salient features is crucial both to their outstanding generalization ability and to the modern deep learning paradigm of pretraining and fine-tuning. However, this feature learning process remains poorly understood from a theoretical perspective, with existing analyses largely restricted to two-layer networks. In this work we show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks. We analyze the features learned by a three-layer network trained with layer-wise gradient descent, and present a general-purpose theorem which upper bounds the sample complexity and width needed to achieve low test error when the target has specific hierarchical structure. We instantiate our framework in specific statistical learning settings – single-index models and functions of quadratic features – and show that in the latter setting three-layer networks obtain a sample complexity improvement over all existing guarantees for two-layer networks. Crucially, this sample complexity improvement relies on the ability of three-layer networks to efficiently learn nonlinear features. We then establish a concrete optimization-based depth separation by constructing a function which is efficiently learnable via gradient descent on a three-layer network, yet cannot be learned efficiently by a two-layer network. Our work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
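To make the setup concrete, here is a minimal NumPy sketch of the objects the abstract refers to: a target with hierarchical structure given by a function of a quadratic feature, y = g(xᵀAx), and a three-layer network f(x) = cᵀσ(W₂σ(W₁x)). All names (g, A, the widths, the ridge penalty) are illustrative assumptions, and for simplicity only the outer weights c are fit here, standing in for the final stage of a layer-wise training scheme; the paper's actual analysis trains the layers with gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m1, m2 = 8, 2000, 64, 64

# Hypothetical hierarchical target: a function of a quadratic feature,
# y = g(x^T A x). The choices of g and A here are purely illustrative.
A = rng.standard_normal((d, d)) / d
g = np.tanh
X = rng.standard_normal((n, d))
y = g(np.einsum("ni,ij,nj->n", X, A, X))

# Three-layer network f(x) = c^T relu(W2 relu(W1 x)).
relu = lambda z: np.maximum(z, 0.0)
W1 = rng.standard_normal((m1, d)) / np.sqrt(d)
W2 = rng.standard_normal((m2, m1)) / np.sqrt(m1)

# Sketch of the final layer-wise stage: with W1, W2 fixed, fit only the
# outer weights c by ridge regression on the hidden representation.
H = relu(relu(X @ W1.T) @ W2.T)           # (n, m2) learned features
lam = 1e-3
c = np.linalg.solve(H.T @ H + lam * np.eye(m2), H.T @ y)

pred = H @ c
mse = np.mean((pred - y) ** 2)            # training error of the fit
```

This toy version only illustrates the function classes and network shape involved; the paper's sample complexity guarantees concern how gradient descent on the intermediate layer recovers the nonlinear feature xᵀAx, which a fixed random representation cannot do efficiently.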




