On the Adversarial Robustness of Generative Autoencoders in the Latent Space

by Mingfei Lu, et al.

Generative autoencoders, such as variational autoencoders and adversarial autoencoders, have achieved great success in many real-world applications, including image generation and signal communication. However, little attention has been devoted to their robustness during practical deployment. Owing to their probabilistic latent structure, variational autoencoders (VAEs) may suffer from problems such as a mismatch between the posterior distribution of the latent codes and the real data manifold, or discontinuities in that posterior distribution. This leaves a back door through which malicious attackers can collapse VAEs from the latent space, especially in scenarios where the encoder and decoder are deployed separately, such as communication and compressed sensing. In this work, we provide the first study of the adversarial robustness of generative autoencoders in the latent space. Specifically, we empirically demonstrate the latent vulnerability of popular generative autoencoders through attacks in the latent space. We also evaluate the difference between variational autoencoders and their deterministic variants and observe that the latter exhibit better latent robustness. Meanwhile, we identify a potential trade-off between adversarial robustness and the degree of disentanglement of the latent codes. Additionally, we verify that the latent robustness of VAEs can be improved through adversarial training. In summary, we call attention to the adversarial latent robustness of generative autoencoders, analyze several robustness-related issues, and offer insights into a series of key challenges.
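To make the idea of a latent-space attack concrete, the following is a minimal sketch (not the paper's exact method) of how an attacker with access to a decoder might search for a small latent perturbation that maximizes output distortion. The toy decoder D(z) = tanh(Wz), the step sizes, and the projected-gradient-ascent loop are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy decoder: D(z) = tanh(W @ z).
rng = np.random.default_rng(0)
latent_dim, out_dim = 8, 32
W = rng.standard_normal((out_dim, latent_dim))

def decode(z):
    return np.tanh(W @ z)

def latent_attack(z, eps=0.1, steps=50, lr=0.05):
    """Find delta with ||delta|| <= eps maximizing ||D(z+delta) - D(z)||^2
    via normalized projected gradient ascent (illustrative sketch)."""
    clean = decode(z)
    delta = 1e-3 * rng.standard_normal(z.shape)  # small random start
    for _ in range(steps):
        pre = W @ (z + delta)
        diff = np.tanh(pre) - clean
        # Analytic gradient of the squared distortion w.r.t. delta.
        grad = 2 * W.T @ (diff * (1 - np.tanh(pre) ** 2))
        delta += lr * grad / (np.linalg.norm(grad) + 1e-12)
        # Project back onto the eps-ball around the clean latent code.
        n = np.linalg.norm(delta)
        if n > eps:
            delta *= eps / n
    return delta

z = rng.standard_normal(latent_dim)
delta = latent_attack(z)
distortion = np.linalg.norm(decode(z + delta) - decode(z))
print(np.linalg.norm(delta), distortion)
```

In the separated-deployment scenario the abstract describes (encoder and decoder on different ends of a channel), such a perturbation could be injected into the transmitted latent code; adversarial training in the latent space, as the paper suggests, would optimize the decoder against exactly these worst-case perturbations.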



Analyzing the Posterior Collapse in Hierarchical Variational Autoencoders

Hierarchical Variational Autoencoders (VAEs) are among the most popular ...

Sparsity in Variational Autoencoders

Working in high-dimensional latent spaces, the internal encoding of data...

Shortcut Detection with Variational Autoencoders

For real-world applications of machine learning (ML), it is essential th...

Perturbation theory approach to study the latent space degeneracy of Variational Autoencoders

The use of Variational Autoencoders in different Machine Learning tasks ...

Generative Autoencoders as Watermark Attackers: Analyses of Vulnerabilities and Threats

Invisible watermarks safeguard images' copyrights by embedding hidden me...

Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs

In principle, applying variational autoencoders (VAEs) to sequential dat...

Supervising the Decoder of Variational Autoencoders to Improve Scientific Utility

Probabilistic generative models are attractive for scientific modeling b...
