An Overview on Controllable Text Generation via Variational Auto-Encoders

by Haoqin Tu, et al.

Recent advances in neural generative modeling have reignited hopes of building computer systems capable of conversing with humans and understanding natural language. Deep neural architectures have been widely explored across a multitude of contexts and tasks to fulfill various user needs. On the one hand, producing textual content that meets specific requirements is a priority for a model that seamlessly conducts conversations with different groups of people. On the other hand, latent variable models (LVMs) such as variational auto-encoders (VAEs), one of the most popular families of generative models, are designed to characterize the distributional patterns of textual data; they are therefore inherently suited to learning the integral textual features worth exploring for controllable generation. This overview introduces existing generation schemes and the problems associated with text variational auto-encoders, and reviews several applications of controllable generation that instantiate these general formulations,[A detailed paper list is available at <>] along with related datasets, metrics, and discussions of future research. We hope this overview offers a picture of open questions, popular methodologies, and preliminary ideas for controllable language generation under the scope of the variational auto-encoder.
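To make the VAE framing concrete, the objective balances a reconstruction term against a KL divergence that regularizes the approximate posterior toward a standard Gaussian prior. The sketch below (illustrative only, not from the paper; function names are our own) computes the closed-form diagonal-Gaussian KL term and the reparameterization trick used to sample latent codes:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, sigma^2 I) || N(0, I) ) for a diagonal Gaussian posterior,
    where log_var = log(sigma^2). This is the regularizer in the VAE objective (ELBO)."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    which keeps sampling differentiable w.r.t. (mu, log_var)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# When the posterior equals the prior, the KL term vanishes --
# the degenerate "posterior collapse" regime often discussed for text VAEs.
mu, log_var = np.zeros(4), np.zeros(4)
print(gaussian_kl(mu, log_var))  # 0.0

rng = np.random.default_rng(0)
z = reparameterize(mu, log_var, rng)  # one latent sample, shape (4,)
```

In a full text VAE, `mu` and `log_var` come from an encoder network and `z` feeds a decoder that reconstructs the sentence; this snippet isolates only the latent-variable machinery.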




