A Gaussian Process perspective on Convolutional Neural Networks
In this paper we cast the well-known convolutional neural network from a Gaussian process perspective. In this way we hope to gain additional insight into the performance of convolutional networks, in particular to understand under what circumstances they tend to perform well and what assumptions are implicitly made by the network. While the convergence of feedforward networks to Gaussian processes has been studied extensively, little is known about the situations in which the output of a convolutional network approaches a multivariate normal distribution. In a convolutional network the sum is computed over variables which are not necessarily identically distributed, so the classical central limit theorem, which assumes i.i.d. summands, does not apply. Nevertheless, we can apply a Lyapunov-type bound on the distance between the output of the convolutional network and a Gaussian process, and use this bound to study the conditions under which the convolutional network behaves approximately like a Gaussian process, so that this behavior, depending on the application, can be either obtained or avoided.
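For orientation, the Lyapunov-type argument alluded to above relaxes the identically-distributed assumption of the classical central limit theorem. Stated here as background (the paper's actual bound may differ), the standard Lyapunov condition for independent summands \(X_1, \dots, X_n\) with means \(\mu_i\) and variances \(\sigma_i^2\) is

\[
s_n^2 = \sum_{i=1}^{n} \sigma_i^2, \qquad
\lim_{n \to \infty} \frac{1}{s_n^{2+\delta}}
\sum_{i=1}^{n} \mathbb{E}\!\left[\, |X_i - \mu_i|^{2+\delta} \,\right] = 0
\quad \text{for some } \delta > 0 .
\]

When this condition holds, \(\frac{1}{s_n}\sum_{i=1}^{n}(X_i - \mu_i) \xrightarrow{d} \mathcal{N}(0, 1)\) even though the \(X_i\) need not be identically distributed; this is the regime relevant to the weighted sums inside a convolutional layer.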
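As a quick empirical illustration (a minimal sketch under assumed conventions, not the construction from the paper), one can sample the output of a small random network at a fixed input and watch its distribution become more Gaussian as the number of channels grows. Here the input vector plays the role of a single flattened convolutional patch, and the \(1/\sqrt{\text{fan-in}}\) weight scaling is an assumption of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net_output(x, n_channels, n_samples=10000):
    """Output z = (1/sqrt(C)) * sum_c v_c * relu(w_c . x) of a random
    two-layer network at a fixed input x, for n_samples independent
    draws of the weights. At one spatial position a convolutional
    layer acts like a dense layer on the flattened patch x, so this
    mimics a single conv output. The N(0, 1/fan_in) weight scale is
    an assumption of this sketch, not taken from the paper."""
    d = x.shape[0]
    w = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n_samples, n_channels, d))
    v = rng.normal(0.0, 1.0, size=(n_samples, n_channels))
    h = np.maximum(w @ x, 0.0)                 # (n_samples, n_channels)
    return (v * h).sum(axis=1) / np.sqrt(n_channels)

x = rng.standard_normal(8)                     # a fixed flattened patch
for c in (1, 8, 128):
    z = random_net_output(x, c)
    # Excess kurtosis of a Gaussian is 0; use it as a crude proxy.
    kurt = np.mean((z - z.mean()) ** 4) / np.var(z) ** 2 - 3.0
    print(f"channels C={c:4d}  excess kurtosis of output: {kurt:+.3f}")
```

With a single channel the ReLU zeroes roughly half of the draws and the output is strongly non-Gaussian; as the channel count grows the excess kurtosis shrinks toward zero, which is the kind of convergence a Lyapunov-type bound makes quantitative.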