Tensorizing Generative Adversarial Nets

10/30/2017
by Xingwei Cao et al.

Generative Adversarial Network (GAN) and its variants demonstrate state-of-the-art performance among generative models. To capture higher-dimensional distributions, however, the common learning procedure requires high computational complexity and a large number of parameters. In this paper, we present a new generative adversarial framework that represents each layer as a tensor structure connected by multilinear operations, aiming to reduce the number of model parameters by a large factor while preserving generalization performance. To learn the model, we develop an efficient algorithm that alternates the optimization over the mode connections. Experimental results demonstrate that our model achieves a compression rate for the model parameters of up to 40 times compared to the existing GAN.
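
The abstract does not spell out the layer construction, but the general idea of a tensorized layer can be sketched as follows: instead of storing a dense weight matrix, the layer keeps a chain of small cores connected by multilinear contractions. The sketch below uses a tensor-train (TT) style factorization in PyTorch; the class name TTLinear, the mode shapes, and the ranks are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch only: a TT-style factorized linear layer, not the
# authors' exact method. Mode shapes and ranks below are assumptions.
import math
import torch
import torch.nn as nn


class TTLinear(nn.Module):
    """Linear layer whose weight is stored as a chain of small TT cores
    instead of a dense (prod(in_modes) x prod(out_modes)) matrix."""

    def __init__(self, in_modes, out_modes, ranks):
        super().__init__()
        assert len(in_modes) == len(out_modes) == len(ranks) - 1
        self.in_modes, self.out_modes = in_modes, out_modes
        # One 4-way core per mode, with shape (r_{k-1}, m_k, n_k, r_k).
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(ranks[k], in_modes[k],
                                           out_modes[k], ranks[k + 1]))
            for k in range(len(in_modes))
        ])

    def full_weight(self):
        # Contract the cores into the dense weight for clarity; efficient
        # implementations contract the input with each core instead.
        w = self.cores[0]                         # (1, m_1, n_1, r_1)
        for core in self.cores[1:]:
            w = torch.tensordot(w, core, dims=1)  # chain over the TT ranks
        w = w.squeeze(0).squeeze(-1)              # (m_1, n_1, ..., m_d, n_d)
        d = len(self.in_modes)
        perm = [2 * k for k in range(d)] + [2 * k + 1 for k in range(d)]
        return w.permute(*perm).reshape(math.prod(self.in_modes),
                                        math.prod(self.out_modes))

    def forward(self, x):
        return x @ self.full_weight()


# Example: a 784 -> 1024 layer stored in 4 small cores.
layer = TTLinear(in_modes=(4, 7, 4, 7), out_modes=(4, 8, 4, 8),
                 ranks=(1, 4, 4, 4, 1))
out = layer(torch.randn(16, 784))
print(out.shape, sum(p.numel() for p in layer.parameters()))
```

With these illustrative shapes the cores hold roughly 1.4K parameters versus about 800K for the equivalent dense weight, which is the kind of reduction the paper targets.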
