Convergence of gradient-based pre-training in denoising autoencoders

02/12/2015
by Vamsi K. Ithapu, et al.

The success of deep architectures is at least in part attributed to the layer-by-layer unsupervised pre-training that initializes the network. Various papers have reported extensive empirical analyses focusing on the design and implementation of good pre-training procedures. However, an understanding of the consistency of the parameter estimates, the convergence of the learning procedure, and the required sample sizes is still unavailable in the literature. In this work, we study pre-training in classical and distributed denoising autoencoders with these goals in mind. We show that the gradient converges at the rate of 1/√(N) and has a sub-linear dependence on the size of the autoencoder network. In a distributed setting where disjoint sections of the whole network are pre-trained synchronously, we show that the convergence improves by at least τ^(3/4), where τ corresponds to the size of the sections. We provide a broad set of experiments that empirically evaluate the predicted behavior.
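As an illustration of the setting the abstract describes, the sketch below implements greedy layer-wise pre-training with denoising autoencoders in plain NumPy. It is not the authors' code: the tied-weight sigmoid architecture, the corruption level, the learning rate, and the layer sizes are all illustrative assumptions.

```python
# A minimal sketch of greedy layer-wise denoising-autoencoder pre-training.
# All hyperparameters below (corruption level, learning rate, layer sizes)
# are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_dae_layer(X, hidden_dim, corruption=0.3, lr=0.1, epochs=20):
    """Train one denoising autoencoder layer with plain SGD (tied weights)."""
    n, d = X.shape
    W = rng.normal(0.0, 0.01, size=(d, hidden_dim))
    b_h = np.zeros(hidden_dim)
    b_v = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            x = X[i]
            # Corrupt the input by zeroing a random subset of coordinates.
            x_tilde = x * (rng.random(d) > corruption)
            h = sigmoid(x_tilde @ W + b_h)      # encode
            x_hat = sigmoid(h @ W.T + b_v)      # decode (tied weights)
            # Gradients of the squared-error reconstruction loss.
            delta_v = (x_hat - x) * x_hat * (1 - x_hat)
            delta_h = (delta_v @ W) * h * (1 - h)
            grad_W = np.outer(x_tilde, delta_h) + np.outer(delta_v, h)
            W -= lr * grad_W
            b_h -= lr * delta_h
            b_v -= lr * delta_v
    return W, b_h

def greedy_pretrain(X, layer_dims):
    """Stack DAE layers: each layer trains on the previous layer's codes."""
    params, H = [], X
    for dim in layer_dims:
        W, b_h = pretrain_dae_layer(H, dim)
        params.append((W, b_h))
        H = sigmoid(H @ W + b_h)
    return params

# Toy usage: 500 samples of 64-dimensional data, two hidden layers.
X = rng.random((500, 64))
params = greedy_pretrain(X, layer_dims=[32, 16])
```

The distributed variant discussed in the abstract would instead pre-train disjoint sections of the network in parallel; the sketch above only shows the classical, sequential layer-by-layer case.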
