WaveNODE: A Continuous Normalizing Flow for Speech Synthesis

06/08/2020
by Hyeongju Kim, et al.

In recent years, various flow-based generative models have been proposed to generate high-fidelity waveforms in real time. However, these models require either a well-trained teacher network or a large number of flow steps, which makes them memory-inefficient. In this paper, we propose a novel generative model called WaveNODE, which exploits a continuous normalizing flow for speech synthesis. Unlike conventional models, WaveNODE places no constraint on the function used for the flow operation, allowing more flexible and complex functions to be used. Moreover, WaveNODE can be optimized to maximize the likelihood directly, without requiring a teacher network or auxiliary loss terms. We show experimentally that WaveNODE achieves performance comparable to conventional flow-based vocoders while using fewer parameters.
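To make the idea concrete, below is a minimal Python sketch of the continuous-normalizing-flow likelihood that WaveNODE builds on: the waveform is mapped to a Gaussian latent by integrating a conditional ODE, and the log-likelihood follows from the instantaneous change-of-variables formula with a Hutchinson trace estimator. Everything here (CondODEFunc, cnf_log_likelihood, the fixed-step Euler integrator, the layer sizes and tensor shapes) is an illustrative assumption, not the paper's implementation.

# A minimal sketch of the continuous-normalizing-flow idea behind WaveNODE,
# not the authors' implementation. All names, shapes, and the Euler solver
# are illustrative assumptions.
import math
import torch
import torch.nn as nn


class CondODEFunc(nn.Module):
    """Dynamics dz/dt = f(z, t, c), conditioned on acoustic features c
    (e.g. upsampled mel-spectrogram frames)."""

    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, t, z, c):
        # Concatenate state, conditioning, and (scalar) time before the MLP.
        t_col = torch.full((z.shape[0], 1), t, device=z.device)
        return self.net(torch.cat([z, c, t_col], dim=-1))


def cnf_log_likelihood(func, x, c, n_steps=32):
    """Integrate z from t=0 (waveform x) to t=1 (latent) with Euler steps.

    Instantaneous change of variables:
        log p(x) = log p_base(z(1)) + integral_0^1 tr(df/dz) dt,
    with the trace estimated by Hutchinson's estimator
        tr(df/dz) ~= v^T (df/dz) v,  v ~ N(0, I).
    """
    z = x.clone().requires_grad_(True)
    logdet = torch.zeros(x.shape[0], device=x.device)
    v = torch.randn_like(x)          # probe vector for the trace estimate
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        with torch.enable_grad():
            dz = func(t, z, c)
            # v^T (df/dz) via one vector-Jacobian product, then dot with v.
            vjp = torch.autograd.grad(dz, z, v, create_graph=True)[0]
            div = (vjp * v).sum(dim=-1)
        z = z + dt * dz              # Euler step
        logdet = logdet + dt * div   # accumulate integral of tr(df/dz)
    # Standard-Gaussian base density on the latent z(1).
    logp_z = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=-1)
    return logp_z + logdet


if __name__ == "__main__":
    # Toy usage: batches of 16-sample waveform frames with 8-dim conditioning.
    func = CondODEFunc(dim=16, cond_dim=8)
    x = torch.randn(4, 16)
    c = torch.randn(4, 8)
    loss = -cnf_log_likelihood(func, x, c).mean()   # negative log-likelihood
    loss.backward()
    print(float(loss))

Because the training objective in this sketch is a (estimated) log-likelihood, it needs neither a distillation teacher nor auxiliary loss terms, which is the property the abstract highlights.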

Related research

WaveFlow: A Compact Flow-based Model for Raw Audio (12/03/2019)
FloWaveNet: A Generative Flow for Raw Audio (11/06/2018)
MobileStyleGAN: A Lightweight Convolutional Neural Network for High-Fidelity Image Synthesis (04/10/2021)
Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale (06/23/2023)
Graph Residual Flow for Molecular Graph Generation (09/30/2019)
Learning to Efficiently Sample from Diffusion Probabilistic Models (06/07/2021)
