Saddlepoints in Unsupervised Least Squares
This paper sheds light on the risk landscape of unsupervised least squares in the context of deep auto-encoding neural nets. We formally establish an equivalence between unsupervised least squares and principal manifolds. This link provides insight into the risk landscape of auto-encoding under the mean squared error: in particular, all non-trivial critical points are saddlepoints. Finding saddlepoints is difficult in itself; overcomplete auto-encoding poses the additional challenge that the saddlepoints are degenerate. Within this context we discuss regularization of auto-encoders, in particular bottleneck, denoising and contractive auto-encoding, and propose a new optimization strategy that can be framed as a particular form of contractive regularization.
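To make the objective referred to above concrete, the following is a minimal sketch (not the authors' method) of an auto-encoding risk under the mean squared error with a contractive penalty, i.e. the squared Frobenius norm of the encoder Jacobian. The sigmoid encoder, linear decoder, the weight lam, and the function name contractive_risk are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_risk(X, W_enc, W_dec, lam=1e-2):
    """Mean squared reconstruction error of a one-hidden-layer auto-encoder
    plus a contractive penalty (squared Frobenius norm of the encoder Jacobian)."""
    H = sigmoid(X @ W_enc)                     # hidden codes, shape (n, k)
    X_hat = H @ W_dec                          # linear decoder, shape (n, d)
    mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))

    # Per-sample Jacobian norm of a sigmoid encoder:
    # sum_j (h_j (1 - h_j))^2 * ||W_enc[:, j]||^2
    gain_sq = (H * (1.0 - H)) ** 2             # shape (n, k)
    col_norm_sq = np.sum(W_enc ** 2, axis=0)   # shape (k,)
    jacobian_penalty = np.mean(gain_sq @ col_norm_sq)

    return mse + lam * jacobian_penalty

# Usage on random data; k > d makes the auto-encoder overcomplete,
# the regime in which the abstract notes the saddlepoints are degenerate.
rng = np.random.default_rng(0)
X = rng.normal(size=(128, 20))                 # n=128 samples, d=20 features
W_enc = rng.normal(scale=0.1, size=(20, 30))   # overcomplete code: k=30 > d=20
W_dec = rng.normal(scale=0.1, size=(30, 20))
print(contractive_risk(X, W_enc, W_dec))
```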