TTS En Tacotron2

Description
Tacotron2 Speech Synthesis model trained on female English speech
Publisher
NVIDIA
Latest Version
1.10.0
Modified
April 4, 2023
Size
111.27 MB

Model Overview

Tacotron2 is an encoder-attention-decoder model. The encoder consists of three parts in sequence: 1) a word embedding, 2) a convolutional network, and 3) a bi-directional LSTM. The encoded representation is connected to the decoder via a Location Sensitive Attention module. The decoder consists of a 2-layer LSTM network, a convolutional postnet, and a fully connected prenet.
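
As a rough illustration of this encoder path, here is a minimal PyTorch sketch; the sizes (a 512-dim embedding, three convolutional layers, a single bi-directional LSTM) are illustrative assumptions, not the exact NeMo configuration:

import torch
import torch.nn as nn

# Sketch of the Tacotron2 encoder; layer sizes are assumptions
class EncoderSketch(nn.Module):
    def __init__(self, vocab_size=148, dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)        # 1) embedding
        self.convs = nn.Sequential(                           # 2) convolutional network
            *[nn.Sequential(nn.Conv1d(dim, dim, kernel_size=5, padding=2),
                            nn.BatchNorm1d(dim),
                            nn.ReLU())
              for _ in range(3)])
        self.lstm = nn.LSTM(dim, dim // 2, batch_first=True,  # 3) bi-directional LSTM
                            bidirectional=True)

    def forward(self, tokens):                 # tokens: [batch, text_len]
        x = self.embedding(tokens)             # [batch, text_len, dim]
        x = self.convs(x.transpose(1, 2))      # convolve over the time axis
        x, _ = self.lstm(x.transpose(1, 2))    # encoded representation
        return x                               # [batch, text_len, dim]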

During training, the ground truth frame is fed through the prenet and passed as input to the decoder LSTM layers. During inference, the model's prediction at the previous time step is used instead. In addition, an attention context is computed by the attention layer at each step and concatenated with the prenet output. The output of the LSTM network, concatenated with the attention context, is sent through two projection layers: the first projects to a spectrogram frame, while the other projects to a stop token. The spectrogram is then sent through the convolutional postnet, which computes a residual to add to the generated spectrogram.
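
The step below sketches this decoder loop in PyTorch; the sizes (80 mel bands, a 256-dim prenet, 1024-dim LSTM layers) are assumptions, and the attention computation and postnet residual are omitted for brevity:

# Sketch of one Tacotron2 decoder step; sizes are assumptions, and the
# attention context is taken as an input rather than computed here
class DecoderStepSketch(nn.Module):
    def __init__(self, n_mels=80, enc_dim=512, prenet_dim=256, lstm_dim=1024):
        super().__init__()
        self.prenet = nn.Sequential(                            # fully connected prenet
            nn.Linear(n_mels, prenet_dim), nn.ReLU(),
            nn.Linear(prenet_dim, prenet_dim), nn.ReLU())
        self.lstm = nn.LSTM(prenet_dim + enc_dim, lstm_dim,     # 2-layer decoder LSTM
                            num_layers=2, batch_first=True)
        self.mel_proj = nn.Linear(lstm_dim + enc_dim, n_mels)   # spectrogram frame
        self.stop_proj = nn.Linear(lstm_dim + enc_dim, 1)       # stop token

    def forward(self, prev_frame, attn_context, state=None):
        # prev_frame: ground truth frame (training) or previous prediction (inference)
        x = self.prenet(prev_frame)                             # [batch, prenet_dim]
        x = torch.cat([x, attn_context], dim=-1).unsqueeze(1)   # add attention context
        out, state = self.lstm(x, state)
        out = torch.cat([out.squeeze(1), attn_context], dim=-1)
        return self.mel_proj(out), self.stop_proj(out), state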

Trained or fine-tuned NeMo models (with the file extension .nemo) can be converted to Riva models and then deployed. Here is a pre-trained Tacotron2 Speech Synthesis Riva model. Note that the Tacotron2 model at that link is not contained in a .riva file; rather, it is used directly in the Riva build phase as a .nemo file.

Training

This model is trained on the LJSpeech dataset, sampled at 22050Hz, and has been tested on generating female English voices with an American accent.

Performance

No performance information available at this time.

How to Use this Model

You can download the NeMo Docker container, which includes NeMo release 1.10.0, and run inference inside it:

$ docker pull nvcr.io/nvidia/nemo:22.05
$ docker run --gpus all -it --rm --shm-size=8g -p 8080:8080 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/nemo:22.05 bash

This model can be automatically loaded from NGC.

NOTE: In order to generate audio, you also need a 22050Hz vocoder from NeMo. This example uses the HiFi-GAN model.

# Load Tacotron2
from nemo.collections.tts.models import Tacotron2Model
spec_generator = Tacotron2Model.from_pretrained("tts_en_tacotron2")

# Load vocoder
from nemo.collections.tts.models import HifiGanModel
vocoder = HifiGanModel.from_pretrained(model_name="tts_hifigan")

# Generate audio
import soundfile as sf
import torch
with torch.no_grad():
    parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
    spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
    audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)

# Save the audio to disk in a file called speech.wav
if isinstance(audio, torch.Tensor):
    audio = audio.to('cpu').numpy()
sf.write("speech.wav", audio.T, 22050, format="WAV")
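
If you are running this in a Jupyter notebook, you can also play the result inline (this assumes IPython is available in the environment):

# Play the saved waveform inline in a notebook
import IPython.display as ipd
ipd.Audio(filename="speech.wav")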

Input

This model accepts batches of text.
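
Concretely, the parse() method shown above converts a raw string into the token tensor the model consumes; the shape comment below is an assumption for single-sentence input:

# parse() returns token IDs with a leading batch dimension
tokens = spec_generator.parse("Hello world")
print(tokens.shape)  # e.g. torch.Size([1, num_tokens]) (assumed shape)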

Output

This model generates mel spectrograms.
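
Continuing the earlier example, you can inspect the output directly; the 80 mel bands noted below are an assumption based on the standard Tacotron2 configuration:

# Generated spectrograms are batched as [batch, n_mels, time]
print(spectrogram.shape)  # e.g. torch.Size([1, 80, T]), assuming 80 mel bands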

Limitations

This checkpoint only works well with vocoders that were trained on 22050Hz data. Otherwise, the generated audio may be scratchy or choppy-sounding.

Versions

1.10.0 (current): Refactored Tacotron2 to use the new TTSDataset class, and made minor bug fixes for file paths.

1.0.0: An updated version of Tacotron2 that standardizes mel spectrogram generation across NeMo models.

1.0.0rc1: The original version, released with NeMo 1.0.0rc1.

References

Tacotron2 paper: https://arxiv.org/abs/1712.05884

Location Sensitive Attention paper: https://arxiv.org/abs/1506.07503

License

License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.