WaveGlow is a neural vocoder model for text-to-speech applications. It is intended as the second part of a two-stage speech synthesis pipeline, with a mel-spectrogram generator such as Tacotron2 as the first stage.
WaveGlow is a Glow-based (i.e., flow-based) model that generates audio conditioned on mel spectrograms. WaveGlow is a reversible neural network that can be run in two modes: the first mode takes audio and transforms it into samples drawn from a normal distribution; the second mode takes samples from a normal distribution and transforms them into audio. Both modes are conditioned on a mel spectrogram.
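The two modes above follow from the flow being built out of invertible transforms. A minimal sketch of one such transform, an affine coupling step conditioned on a mel frame, is shown below; the weights, sizes, and helper names are illustrative assumptions, not the actual WaveGlow architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy weights standing in for WaveGlow's conditioning network (assumption).
W = rng.normal(size=(4, 6))

def _scale_shift(x_a, mel):
    # Any function of (x_a, mel) may be used here; it is never inverted.
    h = np.tanh(np.concatenate([x_a, mel]) @ W.T)
    return h[:2], h[2:]  # log-scale, shift

def forward(x, mel):
    # Mode 1: audio-like vector -> latent sample z.
    x_a, x_b = x[:2], x[2:]
    log_s, t = _scale_shift(x_a, mel)
    return np.concatenate([x_a, x_b * np.exp(log_s) + t])

def inverse(z, mel):
    # Mode 2: latent sample z -> audio-like vector (exact inverse of forward).
    z_a, z_b = z[:2], z[2:]
    log_s, t = _scale_shift(z_a, mel)
    return np.concatenate([z_a, (z_b - t) * np.exp(-log_s)])

x = rng.normal(size=4)
mel = rng.normal(size=4)
x_rec = inverse(forward(x, mel), mel)  # recovers x exactly
```

Because the coupling leaves half the vector untouched, the same scale and shift can be recomputed on inversion, which is what makes running the network "backwards" cheap.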
This model can be used to generate most voices in most languages without retraining; we have observed this trained WaveGlow generate both English and Mandarin audio.
This model is trained on the LJSpeech dataset sampled at 22050 Hz.
WaveGlow is intended to be used as the second part of a two-stage speech synthesis pipeline: it takes a mel spectrogram and returns audio.
Input: Mel spectrogram of shape (batch x mel_channels x time)
Output: Audio of shape (batch x time)
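The shape contract can be sketched with a stand-in for the vocoder; the hop length of 256 and the 80 mel channels below are common defaults assumed for illustration, and the actual upsampling factor depends on the mel configuration used in training.

```python
import numpy as np

BATCH, MEL_CHANNELS, FRAMES, HOP = 2, 80, 10, 256  # assumed sizes

def fake_vocoder(mel):
    # Stand-in for WaveGlow's interface: maps (batch, mel_channels, time)
    # to (batch, time * HOP). Returns silence; only the shapes matter here.
    assert mel.ndim == 3 and mel.shape[1] == MEL_CHANNELS
    b, _, t = mel.shape
    return np.zeros((b, t * HOP))

mel = np.zeros((BATCH, MEL_CHANNELS, FRAMES))
audio = fake_vocoder(mel)  # shape (2, 2560)
```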
N/A
WaveGlow paper: https://arxiv.org/abs/1811.00002
By downloading and using the models and resources packaged with TLT Conversational AI, you accept the terms of the Riva license.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.