Inductive Bias of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm

by Meena Jagadeesan, et al.

We study the function space characterization of the inductive bias resulting from controlling the ℓ_2 norm of the weights in linear convolutional networks. We view this in terms of an induced regularizer in the function space, given by the minimum norm of weights required to realize a linear function. For two-layer linear convolutional networks with C output channels and kernel size K, we show the following: (a) If the inputs to the network have a single channel, the induced regularizer for any K is a norm given by a semidefinite program (SDP) that is independent of the number of output channels C. We further validate these results through a binary classification task on MNIST. (b) In contrast, for networks with multi-channel inputs, multiple output channels can be necessary merely to realize all matrix-valued linear functions, and thus the inductive bias does depend on C. Further, for sufficiently large C, the induced regularizers for K=1 and K=D are the nuclear norm and the ℓ_{2,1} group-sparse norm, respectively, of the Fourier coefficients – both of which promote sparse structures.
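The "minimum norm of weights required to realize a linear function" can be made concrete in the simplest setting with a classical fact: for a two-layer linear network realizing a matrix M = U Vᵀ, the minimum of (‖U‖_F² + ‖V‖_F²)/2 over all factorizations equals the nuclear norm of M, attained by a balanced SVD factorization. The sketch below is an illustration of this well-known identity, not code from the paper; the variable names and the dense (non-convolutional) setting are assumptions for the sake of a minimal example.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3))  # target matrix-valued linear map

# Balanced factorization from the SVD: M = Ub @ Vb.T with
# Ub = U diag(sqrt(s)), Vb = V diag(sqrt(s)).
U, s, Vt = np.linalg.svd(M, full_matrices=False)
Ub = U * np.sqrt(s)
Vb = Vt.T * np.sqrt(s)

# The factorization realizes the same linear map ...
assert np.allclose(Ub @ Vb.T, M)

# ... and its squared weight norm equals the nuclear norm (sum of
# singular values), the induced regularizer in this dense setting.
weight_cost = 0.5 * (np.linalg.norm(Ub) ** 2 + np.linalg.norm(Vb) ** 2)
nuclear = s.sum()
assert np.allclose(weight_cost, nuclear)
```

The convolutional cases in the paper replace this dense factorization with circulant weight structure, which is why the induced norms in (b) act on the Fourier coefficients rather than on M directly.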

