Using generative modelling to produce varied intonation for speech synthesis

by Zack Hodari, et al.

Unlike human speakers, typical text-to-speech (TTS) systems are unable to produce multiple distinct renditions of a given sentence. This has previously been addressed by adding explicit external control. In contrast, generative models are able to capture a distribution over multiple renditions and can thus produce varied renditions through sampling. Typical neural TTS models learn the average of the data because they minimise mean squared error. In the context of prosody, taking the average produces flatter, more boring speech: an "average prosody". A generative model that can synthesise multiple prosodies will, by design, not model average prosody. We use variational autoencoders (VAEs), which explicitly place the most "average" data close to the mean of the Gaussian prior. We propose that by moving towards the tails of the prior distribution, the model will transition towards generating more idiosyncratic, varied renditions. Focusing here on intonation, we investigate the trade-off between naturalness and intonation variation and find that typical acoustic models can be either natural or varied, but not both. However, sampling from the tails of the VAE prior produces much more varied intonation than the traditional approaches, whilst maintaining the same level of naturalness.
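The tail-sampling idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes a standard-Gaussian VAE prior over a latent vector z, and the fixed-radius scaling scheme, latent dimensionality, and function name are hypothetical choices for the sketch.

```python
import numpy as np

def sample_latents(dim, n, radius, seed=None):
    """Draw n latent vectors from an isotropic Gaussian prior,
    rescaled to lie at a fixed distance `radius` from the mean.

    A radius near 0 corresponds to near-average prosody; larger
    radii move towards the tails of the prior, which the paper
    associates with more idiosyncratic, varied renditions.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, dim))               # z ~ N(0, I)
    z /= np.linalg.norm(z, axis=1, keepdims=True)   # unit directions
    return radius * z                               # sphere of given radius

# Latents near the prior mean vs. in its tails (hypothetical values):
z_average = sample_latents(dim=16, n=5, radius=0.1)
z_varied = sample_latents(dim=16, n=5, radius=3.0)
```

In a real system, each sampled z would be fed to the VAE decoder to produce one candidate intonation contour; holding the text fixed and varying z yields multiple distinct renditions of the same sentence.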


