
TTS Gu Female Tacotron2



Tacotron2 speech synthesis model for female Gujarati speech, trained on the IndicTTS dataset.



Use Case

Speech Synthesis



Latest Version

May 11, 2022

Size: 107.77 MB

Model Overview

This collection contains Tacotron2 Text to Speech Model for Gujarati language with Female Voice trained on IndicTTS dataset. This model is a mel-spectrogram generator and can be used along with HifiGAN as the vocoder to produce speech.

Model Training Details

Tacotron2 is an encoder-attention-decoder model. The encoder is made of three parts in sequence: 1) a word embedding, 2) a convolutional network, and 3) a bi-directional LSTM. The encoded representation is connected to the decoder via a Location Sensitive Attention module. The decoder is comprised of a 2-layer LSTM network, a convolutional postnet, and a fully connected prenet.

During training, the ground-truth frame is fed through the prenet and passed as input to the decoder LSTM layers. During inference, the model's prediction at the previous time step is used instead. In addition, an attention context is computed by the attention layer at each step and concatenated with the prenet output. The output of the LSTM network, concatenated with the attention context, is sent through two projection layers: the first projects the information to a spectrogram frame, while the other projects it to a stop token. The spectrogram frame is then sent through the convolutional postnet to compute a residual that is added to the generated spectrogram.
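The training/inference difference above (teacher forcing versus autoregressive decoding) can be illustrated with a minimal toy sketch. This is not the real Tacotron2 decoder; `decoder_step` is a hypothetical stand-in for the prenet + LSTM + projection stack, used only to show which previous frame is fed in at each step.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1  # toy stand-in for the decoder weights

def decoder_step(prev_frame):
    # One fake "decoder step": maps the previous mel frame to the next one.
    return np.tanh(W @ prev_frame)

ground_truth = rng.standard_normal((5, 4))  # 5 target mel frames (toy data)
go_frame = np.zeros(4)                      # all-zero frame to start decoding

# Training: teacher forcing - the ground-truth previous frame is fed in.
teacher_forced = []
prev = go_frame
for t in range(5):
    teacher_forced.append(decoder_step(prev))
    prev = ground_truth[t]

# Inference: autoregressive - the model's own previous prediction is fed in.
autoregressive = []
prev = go_frame
for t in range(5):
    out = decoder_step(prev)
    autoregressive.append(out)
    prev = out
```

Both loops produce the same first frame (both start from the go frame) and then diverge, which is exactly why a model trained with teacher forcing can behave differently at inference time.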

Trained or fine-tuned NeMo models (with the file extension .nemo) can be converted to Riva models and then deployed. Here is a pre-trained Tacotron2 Speech Synthesis Riva model. Note that the Tacotron2 model at that link is not contained in a .riva file. Rather, it is used directly in the Riva build phase as a .nemo file.

Dataset Details

The model is trained on the IndicTTS dataset, which is provided by a consortium of the following institutes: IIIT Hyderabad; IIT Kharagpur; IISc, Bangalore; CDAC, Mumbai; CDAC, Thiruvananthapuram; IIT, Guwahati; CDAC, Kolkata; SSNCE, Chennai; DA-IICT, Gujarat; IIT, Mandi; PESIT, Bangalore.


Performance

No performance information is available at this time.

How to Use this Model

This model can be automatically loaded from NGC.

NOTE: In order to generate audio, you also need a 22050Hz vocoder from NeMo. This example uses the HiFi-GAN model.

# Load Tacotron2 and HifiGAN
import IPython.display as ipd
from nemo.collections.tts.models import HifiGanModel
from nemo.collections.tts.models import Tacotron2Model

spec_generator = Tacotron2Model.from_pretrained("tts_gu_female_tacotron2")

vocoder = HifiGanModel.from_pretrained(model_name="tts_hifigan")

# Generate audio
parsed = spec_generator.parse("રિવા એ સ્પીચ રેકગ્નિશન નેચરલ લેંગ્વેજ પ્રોસેસિંગ અને સ્પીચ સિન્થેસિસ છે")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)
ipd.Audio(audio[0].cpu().detach().numpy(), rate=22050)
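Beyond playing the audio inline, you may want to save the generated waveform to disk. A minimal sketch using only the standard-library wave module is shown below; the 440 Hz sine wave is a stand-in for the `audio[0].cpu().detach().numpy()` array the snippet above would produce, and the file name `speech.wav` is arbitrary.

```python
import wave
import numpy as np

rate = 22050  # must match the vocoder's sample rate
t = np.linspace(0, 1.0, rate, endpoint=False)
samples = 0.5 * np.sin(2 * np.pi * 440 * t)  # stand-in for the model's audio

# Convert float samples in [-1, 1] to 16-bit PCM.
pcm = (samples * 32767).astype(np.int16)

with wave.open("speech.wav", "wb") as f:
    f.setnchannels(1)     # mono
    f.setsampwidth(2)     # 16-bit samples
    f.setframerate(rate)  # 22050 Hz
    f.writeframes(pcm.tobytes())
```

Writing the header with the correct 22050 Hz rate matters: saving at a different rate would play the audio back at the wrong pitch and speed.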


This model accepts batches of text.


This model generates mel spectrograms.


This checkpoint only works well with vocoders that were trained on 22050Hz data. Otherwise, the generated audio may be scratchy or choppy-sounding.


References: the Tacotron2 paper and the Location Sensitive Attention paper.


License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.