Residual Tangent Kernels

01/28/2020
by Etai Littwin et al.

A recent body of work has focused on the theoretical study of neural networks in the regime of large width. Specifically, it was shown that training infinitely wide, properly scaled vanilla ReLU networks with the L2 loss is equivalent to kernel regression using the Neural Tangent Kernel (NTK), which is independent of the initialization instance and remains constant during training. In this work, we derive the form of the limiting kernel for architectures that incorporate bypass connections, namely residual networks (ResNets) and densely connected networks (DenseNets). In addition, we derive finite-width corrections for both cases. Our analysis reveals that deep practical residual architectures might operate much closer to the “kernel” regime than their vanilla counterparts: whereas in networks without skip connections, convergence to the limiting kernel requires fixing the depth while increasing the layers' width, in both ResNets and DenseNets convergence to the limiting kernel may occur for networks that are both infinitely deep and infinitely wide, provided proper initialization.
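To make the central object concrete: the (empirical) tangent kernel of a network f with parameters θ is Θ(x, x') = ⟨∇θ f(x), ∇θ f(x')⟩, and the NTK result says this kernel stays essentially fixed during training in the infinite-width limit. The sketch below computes this inner product for a hypothetical one-block residual toy model f(x) = v·(x + relu(W₁x)/√d)/√d with hand-derived gradients; the model and its 1/√d scaling are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def ntk_entry(x1, x2, W1, v):
    """Empirical tangent kernel Θ(x1, x2) = <∇θ f(x1), ∇θ f(x2)> for the
    toy residual net f(x) = v·(x + relu(W1 x)/√d)/√d (illustrative model)."""
    d = len(x1)
    grads = []
    for x in (x1, x2):
        pre = W1 @ x                                # pre-activation of the block
        z = x + np.maximum(pre, 0.0) / np.sqrt(d)   # skip connection + scaled relu
        g_v = z / np.sqrt(d)                        # ∂f/∂v
        g_W = np.outer(v * (pre > 0), x) / d        # ∂f/∂W1 (relu' is an indicator)
        grads.append(np.concatenate([g_v, g_W.ravel()]))
    return grads[0] @ grads[1]

# Usage: kernel values at random standard-normal initialization
rng = np.random.default_rng(0)
d = 8
W1, v = rng.standard_normal((d, d)), rng.standard_normal(d)
x1, x2 = rng.standard_normal(d), rng.standard_normal(d)
print(ntk_entry(x1, x2, W1, v))   # off-diagonal kernel entry
print(ntk_entry(x1, x1, W1, v))   # diagonal entry, a squared gradient norm
```

At finite width this kernel depends on the random draw of (W₁, v) and drifts during training; the paper's result concerns how fast such fluctuations vanish, and in particular how skip connections change that rate as depth grows.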
