NGC | Catalog

Fine-tuning Flowtron Model


Description

Fine-tune and run inference with the Flowtron model, an autoregressive flow-based generative network for text-to-speech synthesis with control over speech variation and style transfer.

Publisher

NVIDIA

Use Case

Other

Framework

Other

Latest Version

2

Modified

August 17, 2021

Compressed Size

50.25 MB

Fine-tuning and Running Inference with the Flowtron Model

Flowtron is an autoregressive flow-based generative network for text-to-speech synthesis with control over speech variation and style transfer. Flowtron borrows insights from autoregressive flows and revamps Tacotron to provide high-quality, expressive mel-spectrogram synthesis. The model is optimized by maximizing the likelihood of the training data, which makes training simple and stable. Flowtron learns an invertible mapping from data to a latent space that can be manipulated to control many aspects of speech synthesis (pitch, tone, speech rate, cadence, accent). Mean opinion scores (MOS) show that Flowtron matches state-of-the-art TTS models in terms of speech quality. In addition, it offers control over speech variation, interpolation between samples, and style transfer between speakers seen and unseen during training. The quick start guide covers the steps for fine-tuning and running inference with this model.
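The invertible mapping and maximum-likelihood objective described above can be illustrated with a toy single-step affine flow. This is a deliberate simplification for intuition only, not Flowtron's actual architecture (which stacks autoregressive affine steps over mel-spectrogram frames); the function names and parameters here are hypothetical:

```python
import math

def affine_flow_forward(x, shift, scale):
    """Map data x to latent z via an invertible affine step: z = (x - shift) / scale."""
    z = [(xi - shift) / scale for xi in x]
    # Change-of-variables term: log|det dz/dx| = -n * log(scale) for a shared scalar scale.
    log_det = -len(x) * math.log(scale)
    return z, log_det

def affine_flow_inverse(z, shift, scale):
    """Exact inverse of the forward step: recover x from z."""
    return [zi * scale + shift for zi in z]

def log_likelihood(x, shift, scale):
    """Exact log-likelihood of x under the flow, assuming a standard-normal prior on z."""
    z, log_det = affine_flow_forward(x, shift, scale)
    log_prior = sum(-0.5 * (zi ** 2 + math.log(2 * math.pi)) for zi in z)
    return log_prior + log_det

# Invertibility means latents can be edited and decoded back exactly:
x = [1.0, 2.0, 3.0]
z, _ = affine_flow_forward(x, shift=2.0, scale=0.5)
recovered = affine_flow_inverse(z, shift=2.0, scale=0.5)
```

Because the mapping is exactly invertible, the likelihood is exact (no variational bound), which is what makes training "simple and stable"; manipulating `z` before inverting is the mechanism behind the pitch/cadence/style controls mentioned above.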

Prerequisites:

  • NVIDIA GPU

  • CUDA and cuDNN
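A quick way to confirm these prerequisites is a best-effort environment check. This sketch (the function name is my own, not part of Flowtron) treats a present `nvidia-smi` binary as evidence of an NVIDIA driver, and uses PyTorch's CUDA/cuDNN queries only if PyTorch happens to be installed:

```python
import shutil

def check_prerequisites():
    """Best-effort check of the Flowtron prerequisites.

    Returns a dict of booleans: whether the nvidia-smi CLI is on PATH
    (implying an NVIDIA GPU driver), and whether an installed PyTorch
    reports CUDA and cuDNN support.
    """
    status = {
        "nvidia_smi": shutil.which("nvidia-smi") is not None,
        "cuda": False,
        "cudnn": False,
    }
    try:
        import torch  # optional: only consulted if PyTorch is installed
        status["cuda"] = torch.cuda.is_available()
        status["cudnn"] = torch.backends.cudnn.is_available()
    except ImportError:
        pass
    return status

print(check_prerequisites())
```

If any entry is `False`, install the NVIDIA driver, CUDA toolkit, and cuDNN before proceeding with fine-tuning.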