Self-Consistent Learning: Cooperation between Generators and Discriminators

03/16/2023
by Tong Wu, et al.

Using generated data to improve the performance of downstream discriminative models has recently gained popularity owing to the rapid progress of pre-trained language models. In most previous studies, generative models and discriminative models are trained separately and thus cannot adapt to changes in each other. As a result, the generated samples easily deviate from the real data distribution, while the improvement of the discriminative model quickly saturates. Generative adversarial networks (GANs) train generative models jointly with discriminative models via an adversarial process. However, the training of standard GANs is notoriously unstable and often fails to converge. In this paper, to address these issues, we propose a self-consistent learning framework, in which a discriminator and a generator are cooperatively trained in a closed-loop form. The discriminator and the generator enhance each other during multiple rounds of alternating training until a scoring consensus is reached. This framework proves to be easy to train and free from instabilities such as mode collapse and non-convergence. Extensive experiments on sentence semantic matching demonstrate the effectiveness of the proposed framework: the discriminator achieves an improvement of over 10 AP in the zero-shot setting and new state-of-the-art performance in the full-data setting.
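The closed-loop alternation described in the abstract can be pictured as follows. This is a minimal sketch only: the `Generator` and `Discriminator` stubs, the acceptance threshold, and the mean-score convergence test are illustrative assumptions, not the paper's actual models or stopping rule.

```python
# Sketch of cooperative, closed-loop training: the generator proposes samples,
# the discriminator scores them, each side updates on the other's output, and
# the loop stops once the scores stabilize ("scoring consensus").
import random


class Generator:
    """Toy stand-in for a generative model; emits placeholder samples."""

    def sample(self, n):
        return [random.random() for _ in range(n)]

    def update(self, accepted):
        # In the real framework the generator would be fine-tuned on the
        # samples the discriminator accepted; here we only record them.
        self.last_accepted = accepted


class Discriminator:
    """Toy stand-in for a discriminative model; scores samples in [0, 1]."""

    def __init__(self):
        self.threshold = 0.5

    def score(self, samples):
        # Placeholder scoring rule; a real model would score semantic matches.
        return [abs(s - 0.5) * 2 for s in samples]

    def update(self, samples, scores):
        # In the real framework the discriminator would be fine-tuned on real
        # data plus accepted generated samples; here we just move the threshold.
        self.threshold = sum(scores) / max(len(scores), 1)


def train_self_consistent(gen, disc, rounds=10, batch=32, tol=1e-3):
    """Alternate generator/discriminator updates until scores stop changing."""
    prev_mean = None
    for r in range(rounds):
        samples = gen.sample(batch)
        scores = disc.score(samples)

        # Keep only samples the current discriminator trusts.
        accepted = [s for s, sc in zip(samples, scores) if sc >= disc.threshold]

        gen.update(accepted)          # generator adapts to the discriminator
        disc.update(samples, scores)  # discriminator adapts to the generator

        mean_score = sum(scores) / len(scores)
        if prev_mean is not None and abs(mean_score - prev_mean) < tol:
            print(f"scoring consensus reached at round {r}")
            break
        prev_mean = mean_score


if __name__ == "__main__":
    train_self_consistent(Generator(), Discriminator())
```

Because both sides only reinforce samples the other already accepts, the loop is cooperative rather than adversarial, which is why it avoids the minimax instabilities (mode collapse, non-convergence) mentioned above.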


