Implicit Bias of Gradient Descent on Linear Convolutional Networks

06/01/2018
by Suriya Gunasekar, et al.

We show that gradient descent on full-width linear convolutional networks of depth L converges to a linear predictor related to the ℓ_{2/L} bridge penalty in the frequency domain. This is in contrast to fully connected linear networks, where gradient descent converges to the hard-margin linear support vector machine solution, regardless of depth.
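As a rough illustration of the quantity the abstract names, the sketch below computes the ℓ_{2/L} bridge penalty of a linear predictor in the frequency domain, i.e. the sum of |F(β)_k|^{2/L} over DFT coefficients. The predictor, depths, and function names are illustrative assumptions, not taken from the paper; note that for L = 1 the penalty reduces (by Parseval, up to a factor of n) to the squared ℓ_2 norm, the norm whose hard-margin minimizer is the SVM solution, while larger L makes the penalty increasingly sparsity-inducing in frequency.

```python
import numpy as np

def bridge_penalty_freq(beta, L):
    """Illustrative ||DFT(beta)||_{2/L}^{2/L}: sum_k |F(beta)_k|^(2/L).

    For L = 1 this is n * ||beta||_2^2 by Parseval; for L >= 2 the
    exponent 2/L < 1 makes the penalty favor frequency-sparse predictors.
    """
    spectrum = np.abs(np.fft.fft(beta))
    return float(np.sum(spectrum ** (2.0 / L)))

# Toy linear predictor (hypothetical example, not from the paper).
beta = np.array([1.0, 0.0, -1.0, 0.0])
for L in (1, 2, 4):
    print(L, bridge_penalty_freq(beta, L))
```

For this toy β the DFT magnitudes are [0, 2, 0, 2], so the penalty is 8 at depth L = 1 and 4 at L = 2, shrinking further as depth grows.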
