
Speech Synthesis English Tacotron2

Description
Mel-Spectrogram prediction conditioned on input text with LJSpeech voice.
Publisher
NVIDIA
Latest Version
deployable_v1.0
Modified
October 6, 2023
Size
107.6 MB

Speech Synthesis: Tacotron 2 Model Card
=======================================

Model Overview
--------------

Tacotron2 is a mel-spectrogram generator, designed to be used as the first part of a neural text-to-speech system in conjunction with a neural vocoder.

Model Architecture
------------------

Tacotron 2 is an LSTM-based encoder-attention-decoder model that converts text to mel spectrograms. The encoder network first embeds either characters or phonemes; the embedding is passed through a convolution stack and then through a bidirectional LSTM. The decoder is an autoregressive LSTM: it generates one time slice of the mel spectrogram on each call. The decoder is connected to the encoder via the attention module, which tells the decoder which part of the encoded text to use when generating each slice of the spectrogram.
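As a rough illustration of the attention step described above, the sketch below computes a context vector as an attention-weighted sum of encoder outputs for a single decoder step. Dimensions and values are made up for illustration; the real model uses a learned location-sensitive attention network to produce the scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; the real encoder outputs are larger).
text_len, enc_dim = 6, 8

# Bidirectional-LSTM encoder outputs: one vector per input character/phoneme.
encoder_outputs = rng.normal(size=(text_len, enc_dim))

# Unnormalized attention scores for the current decoder step (in the real
# model these come from the attention network, not random numbers).
scores = rng.normal(size=text_len)

# Softmax turns the scores into weights over the input positions.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# The context vector is the weighted sum of encoder outputs; the decoder
# conditions on it when predicting the next mel-spectrogram frame.
context = weights @ encoder_outputs

print(weights.sum())   # the weights form a distribution over the text
print(context.shape)   # one context vector of size enc_dim
```

Each decoder step recomputes the weights, which is how the model "reads along" the text as it emits spectrogram frames.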

Training
--------

Dataset

This model is trained on the LJSpeech dataset sampled at 22050 Hz, and can be used to generate female English voices with an American accent.

Performance
-----------

The performance of TTS models is subjective and hard to quantify. Tacotron 2 has been shown to achieve good speech quality when combined with a high-quality neural vocoder such as WaveGlow or HiFi-GAN.

How to use this model
---------------------

Tacotron 2 is intended to be used as the first part of a two stage speech synthesis pipeline. Tacotron 2 takes text and produces a mel spectrogram. The second stage takes the generated mel spectrogram and returns audio.

Input

English text strings

Output

Mel spectrogram of shape (batch x mel_channels x time)
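To make the tensor shapes in the two-stage pipeline concrete, here is a small sketch using stand-in functions. The 80 mel channels and 256-sample hop length are typical Tacotron 2 / LJSpeech settings, used here only for illustration; `fake_tacotron2` and `fake_vocoder` are placeholders that mimic shapes, not the real models (the real Tacotron 2 decides the number of frames itself).

```python
import numpy as np

BATCH, MEL_CHANNELS, HOP_LENGTH, SAMPLE_RATE = 1, 80, 256, 22050

def fake_tacotron2(text: str) -> np.ndarray:
    """Stand-in for the first stage: returns a zero mel spectrogram of
    shape (batch x mel_channels x time), one frame per input character."""
    n_frames = len(text)
    return np.zeros((BATCH, MEL_CHANNELS, n_frames))

def fake_vocoder(mel: np.ndarray) -> np.ndarray:
    """Stand-in for the second stage: each mel frame expands to
    HOP_LENGTH audio samples."""
    batch, _, n_frames = mel.shape
    return np.zeros((batch, n_frames * HOP_LENGTH))

mel = fake_tacotron2("Hello world.")
audio = fake_vocoder(mel)

print(mel.shape)    # (batch, mel_channels, time) -> (1, 80, 12)
print(audio.shape)  # (1, 3072)
print(audio.shape[1] / SAMPLE_RATE)  # audio duration in seconds
```

The point of the sketch is the shape contract between the stages: the vocoder consumes exactly the (batch x mel_channels x time) tensor that Tacotron 2 produces.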

The provided .nemo checkpoint can be used, in conjunction with a WaveGlow checkpoint, to generate speech via Riva (formerly Jarvis). To deploy a TTS service with Riva, please refer to the Riva documentation.

Limitations
-----------

Text-to-speech models do not always pronounce words appropriately. When deploying with Riva, users can specify arbitrary pronunciations for words and expansions for abbreviations.

When deployed with Riva, this Tacotron2 model has a maximum input length of 400 characters. When synthesizing longer text, it is recommended to break down the text at the paragraph/sentence level and make multiple inference requests.
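The sentence-level splitting suggested above can be done with the standard library alone. The sketch below is a simple greedy heuristic, not part of Riva: it splits on sentence-ending punctuation and packs sentences into chunks that stay within the 400-character limit.

```python
import re

MAX_CHARS = 400  # per-request input limit for this model under Riva

def split_for_tts(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Split text on sentence boundaries, then greedily pack sentences
    into chunks no longer than max_chars each."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

long_text = "This is one sentence. " * 40  # well over 400 characters
for chunk in split_for_tts(long_text):
    print(len(chunk))  # every chunk stays within the limit
```

Each chunk can then be sent as a separate inference request and the resulting audio concatenated. Note this heuristic assumes sentences themselves are shorter than the limit; a single 400+ character sentence would need further splitting (e.g. on commas).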

References
----------

Tacotron 2 paper: https://arxiv.org/abs/1712.05884

License
-------

By downloading and using the models and resources packaged with TLT Conversational AI, you accept the terms of the Riva license.

Ethical AI
----------

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.