Bridging adversarial samples and adversarial networks

12/20/2019
by Faqiang Liu, et al.

Generative adversarial networks (GANs) have achieved remarkable performance on various tasks but suffer from training instability. In this paper, we investigate this problem from the perspective of adversarial samples. We find that vanilla GAN training already performs adversarial training of the discriminator on fake samples, but not on real samples, which makes the adversarial training asymmetric. Consequently, the discriminator is vulnerable to adversarial perturbations, and the gradient it provides contains uninformative adversarial noise. This noise cannot improve the fidelity of generated samples, yet it can drastically change the discriminator's predictions, hindering the generator from capturing the distribution of real samples and destabilizing training. To address this, we incorporate adversarial training of the discriminator on real samples into vanilla GANs. This scheme makes adversarial training symmetric and the discriminator more robust. A robust discriminator provides more informative gradients with less adversarial noise, which stabilizes training and accelerates convergence. We validate the proposed method on image generation on CIFAR-10, CelebA, and LSUN with varied network architectures. Experiments show that training is stabilized and the FID scores of generated samples are improved by 10%∼50% relative to the baseline, at an additional 25% computation cost.
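
To make the idea concrete, the sketch below shows one discriminator update of a vanilla GAN augmented with adversarial training on real samples, as the abstract describes. This is a minimal illustration, not the authors' reference implementation: the FGSM-style perturbation, the step size eps, and the equal loss weighting are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, G, real, z, d_optimizer, eps=0.01):
    """One discriminator update with adversarial training on real samples (sketch)."""
    # Standard vanilla-GAN discriminator losses on real and fake samples.
    fake = G(z).detach()
    real_logits = D(real)
    fake_logits = D(fake)
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))

    # Adversarially perturb real samples against the discriminator.
    # FGSM-style perturbation with step size eps is an assumed scheme for illustration.
    real_adv = real.clone().detach().requires_grad_(True)
    adv_logits = D(real_adv)
    adv_obj = F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits))
    grad, = torch.autograd.grad(adv_obj, real_adv)
    real_adv = (real_adv + eps * grad.sign()).detach()

    # Train the discriminator to still classify perturbed real samples as real,
    # symmetrizing adversarial training across real and fake samples.
    out_adv = D(real_adv)
    loss_adv = F.binary_cross_entropy_with_logits(out_adv, torch.ones_like(out_adv))

    loss = loss_real + loss_fake + loss_adv
    d_optimizer.zero_grad()
    loss.backward()
    d_optimizer.step()
    return loss.item()
```

The generator update is left unchanged from the vanilla GAN; only the discriminator sees the additional adversarial term, which is where the reported ∼25% extra computation would come from.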
