Reconstruction of Hidden Representation for Robust Feature Extraction
This paper aims to develop a new and robust approach to feature representation. Motivated by the success of Auto-Encoders, we first theoretically summarize the general properties of all algorithms based on traditional Auto-Encoders: 1) The reconstruction error of the input or corrupted input cannot fall below a lower bound, which can be viewed as a guiding principle for reconstructing the input or corrupted input. 2) Achieving the ideal reconstruction of the hidden representation is a necessary condition for the reconstruction of the input to reach its ideal state. 3) Minimizing the Frobenius norm of the Jacobian matrix has a deficiency and may lead to a much worse local optimum. 4) Minimizing the reconstruction error of the hidden representation is more robust than minimizing the Frobenius norm of the Jacobian matrix. Based on this analysis, we propose a new model termed Double Denoising Auto-Encoders (DDAEs), which applies corruption and reconstruction to both the input and the hidden representation. We demonstrate that the proposed model is highly flexible and extensible, and that it is more robust than Denoising Auto-Encoders (DAEs) in handling inessential features. Comparative experiments show that our model significantly outperforms state-of-the-art models for representation learning.
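The abstract does not specify the DDAE objective, but the core idea of corrupting and reconstructing both the input and the hidden representation can be illustrated with a minimal sketch. The following PyTorch code is an assumption-based illustration only, not the authors' exact model: the Gaussian corruption, the extra hidden-reconstruction head `hid_dec`, and the weighting parameter `lam` are all hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoubleDenoisingAE(nn.Module):
    """Illustrative sketch of a double-denoising auto-encoder:
    corrupt and reconstruct both the input and the hidden representation.
    Architectural details here are assumptions, not the paper's exact model."""

    def __init__(self, n_in, n_hid, noise_std=0.1):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hid)       # encoder
        self.dec = nn.Linear(n_hid, n_in)       # decoder: reconstructs the input
        self.hid_dec = nn.Linear(n_hid, n_hid)  # hypothetical head: reconstructs the hidden code
        self.noise_std = noise_std

    def forward(self, x):
        # Corrupt the input (Gaussian noise; the paper may use a different corruption).
        x_tilde = x + self.noise_std * torch.randn_like(x)
        h = torch.sigmoid(self.enc(x_tilde))          # hidden representation
        x_hat = torch.sigmoid(self.dec(h))            # reconstruction of the clean input

        # Corrupt the hidden representation and reconstruct it as well.
        h_tilde = h + self.noise_std * torch.randn_like(h)
        h_hat = torch.sigmoid(self.hid_dec(h_tilde))  # reconstruction of the hidden code
        return x_hat, h, h_hat


def ddae_loss(x, x_hat, h, h_hat, lam=0.5):
    # Weighted sum of the two reconstruction errors; lam is a hypothetical weight.
    # The hidden target is detached so it acts as a fixed target for its own reconstruction.
    return F.mse_loss(x_hat, x) + lam * F.mse_loss(h_hat, h.detach())


# Example usage on random data.
model = DoubleDenoisingAE(n_in=784, n_hid=256)
x = torch.rand(32, 784)
x_hat, h, h_hat = model(x)
loss = ddae_loss(x, x_hat, h, h_hat)
loss.backward()
```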