
TTS En FastPitch

Description
FastPitch Speech Synthesis model trained on female English speech.
Publisher
NVIDIA
Latest Version
IPA_1.13.0
Modified
April 4, 2023
Size
177.73 MB

Model Overview

FastPitch is a fully-parallel transformer architecture with prosody control over pitch and individual phoneme duration.

Trained or fine-tuned NeMo models (with the file extension .nemo) can be converted to Riva models (with the file extension .riva) and then deployed. This is a pre-trained Riva FastPitch text-to-speech (TTS) model.

Model Architecture

FastPitch is a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be made more expressive, better match the semantics of the utterance, and ultimately be more engaging to the listener. FastPitch is based on a fully-parallel Transformer architecture, with a much higher real-time factor than Tacotron 2 for mel-spectrogram synthesis of a typical utterance.

For more details, please see Model Architecture or refer to the paper.
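The pitch control described above can be illustrated with a small, self-contained sketch. This is plain NumPy for illustration only, not the NeMo API: the model predicts a per-frame pitch (F0) contour, and shifting that contour before synthesis changes the character of the generated speech.

```python
import numpy as np

def shift_pitch(pitch_hz: np.ndarray, semitones: float) -> np.ndarray:
    """Shift a predicted pitch contour by a number of semitones.

    A semitone corresponds to a factor of 2**(1/12) in fundamental
    frequency; unvoiced frames (pitch == 0) are left untouched.
    This is an illustration of pitch-contour editing, not NeMo code.
    """
    factor = 2.0 ** (semitones / 12.0)
    out = pitch_hz.copy()
    voiced = out > 0
    out[voiced] *= factor
    return out

# A toy predicted contour: one unvoiced frame, then voiced frames near 200 Hz.
contour = np.array([0.0, 196.0, 200.0, 204.0])

# Raising the utterance by 12 semitones doubles the fundamental frequency.
raised = shift_pitch(contour, 12.0)
```

In FastPitch, an analogous transformation is applied to the pitch values predicted for each input token before they condition the spectrogram decoder.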

Training

Dataset

This model is trained on LJSpeech sampled at 22050Hz, and has been tested on generating female English voices with an American accent.

Performance

No performance information available at this time.

How to Use this Model

This model can be automatically loaded from NGC. NOTE: to generate audio, you also need a vocoder trained on 22050 Hz data from NeMo. This example uses the HiFi-GAN model.

# Load FastPitch
from nemo.collections.tts.models import FastPitchModel
spec_generator = FastPitchModel.from_pretrained("tts_en_fastpitch")

# Load the HiFi-GAN vocoder
from nemo.collections.tts.models import HifiGanModel
model = HifiGanModel.from_pretrained(model_name="tts_hifigan")

# Generate audio
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get NeMo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)

# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').detach().numpy()[0], 22050)

Input

This model accepts batches of text.

Output

This model generates mel spectrograms.
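For reference, the mel scale used by these spectrograms warps frequency logarithmically to approximate human pitch perception. The conversion below uses the common HTK-style formula as an illustration; it is not necessarily the exact filterbank configuration NeMo uses internally.

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Convert a frequency in Hz to mels (HTK-style formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    """Inverse of hz_to_mel: convert mels back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```

Each row of the output spectrogram corresponds to one mel band; the vocoder learns to invert this representation back to a waveform.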

Limitations

This checkpoint only works well with vocoders that were trained on 22050 Hz data; otherwise, the generated audio may sound scratchy or choppy.
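One way to avoid this failure mode is to check the two sample rates before wiring the models together. The snippet below is a generic sketch with hypothetical variable names, not NeMo-specific code:

```python
def check_rates(spec_rate_hz: int, vocoder_rate_hz: int) -> None:
    """Fail fast if the spectrogram generator and the vocoder disagree
    on sample rate, which would otherwise yield distorted audio."""
    if spec_rate_hz != vocoder_rate_hz:
        raise ValueError(
            f"Sample-rate mismatch: spectrogram model expects {spec_rate_hz} Hz "
            f"but vocoder was trained on {vocoder_rate_hz} Hz"
        )

# This FastPitch checkpoint and the tts_hifigan vocoder both use 22050 Hz.
check_rates(22050, 22050)  # passes silently
```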

Versions

IPA_1.13.0: A version of FastPitch trained with IPA rather than ARPABET.

1.4.0: An updated version trained using the new fastpitch_align.yaml file.

1.0.0: The original version released with NeMo 1.0.0.

References

FastPitch paper: https://arxiv.org/abs/2006.06873

License

License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.