On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias

05/18/2022 ∙ by Itay Safran, et al.
We study the dynamics and implicit bias of gradient flow (GF) on univariate ReLU neural networks with a single hidden layer in a binary classification setting. We show that when the labels are determined by the sign of a target network with r neurons, with high probability over the initialization of the network and the sampling of the dataset, GF converges in direction (suitably defined) to a network achieving perfect training accuracy and having at most 𝒪(r) linear regions, implying a generalization bound. Our result may already hold for mild over-parameterization, where the width is Õ(r) and independent of the sample size.
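The abstract's central quantity, the number of linear regions of a shallow univariate ReLU network, has a simple characterization: each hidden neuron with nonzero input and output weights contributes at most one "kink" at x = -b_i/w_i, so an r-neuron network has at most r + 1 linear regions. The sketch below (an illustration, not from the paper) counts the effective regions of a given network:

```python
def relu_net(x, w, b, v):
    """One-hidden-layer univariate ReLU network: f(x) = sum_i v_i * max(w_i*x + b_i, 0)."""
    return sum(vi * max(wi * x + bi, 0.0) for wi, bi, vi in zip(w, b, v))

def count_linear_regions(w, b, v, tol=1e-9):
    """Upper-bound the number of linear regions of relu_net.

    A neuron only creates a kink if both its input weight w_i and output
    weight v_i are nonzero; the kink sits at x = -b_i / w_i. Distinct kinks
    partition the line into (number of kinks) + 1 linear pieces.
    """
    kinks = {round(-bi / wi, 9) for wi, bi, vi in zip(w, b, v)
             if abs(wi) > tol and abs(vi) > tol}
    return len(kinks) + 1

# Three neurons, but two share the kink at x = 0 -> only 3 regions, not 4.
print(count_linear_regions([1.0, -1.0, 2.0], [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]))
```

The "effective" count can be much smaller than the width: neurons with zeroed output weights or coinciding kinks do not add regions, which is the sense in which a trained over-parameterized network can realize only 𝒪(r) regions.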
