TTS En Multispeaker FastPitch HiFiGAN
Description
This collection contains two models: 1) Multi-speaker FastPitch (around 50M parameters), trained on HiFiTTS with over 291.6 hours of English speech from 10 speakers, and 2) HiFiGAN, trained on mel spectrograms produced by the Multi-speaker FastPitch in (1).
Publisher
NVIDIA
Latest Version
1.10.0
Modified
January 24, 2024
Size
521.09 MB

Model Overview

The English-US Multispeaker FastPitch-HiFiGAN model synthesizes text into audio using two model components: FastPitch and HiFiGAN. This model is ready for commercial use.

This collection contains two models:

  1. Multi-speaker FastPitch is a mel-spectrogram generator, used as the first stage of a neural text-to-speech system together with a neural vocoder. The model is conditioned on the energy of the input audio and uses the International Phonetic Alphabet (IPA) for both training and inference.

  2. HiFiGAN is a neural vocoder model for text-to-speech applications. It is trained on mel spectrograms produced by the Multi-speaker FastPitch in (1) and forms the second stage of the two-stage speech synthesis pipeline.

Model Architecture

FastPitch [1] is a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be made more expressive, better match the semantics of the utterance, and ultimately be more engaging to the listener. FastPitch is based on a fully-parallel Transformer architecture, with a much higher real-time factor than Tacotron2 for mel-spectrogram synthesis of a typical utterance. Additionally, it uses an unsupervised speech-text aligner [2].
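
As a rough illustration of what "altering these predictions" means, the sketch below scales a predicted pitch contour before it would condition the mel-spectrogram decoder. This is plain PyTorch, not the NeMo API; the tensor shape and scaling factors are hypothetical.

import torch

# Hypothetical pitch contour predicted by FastPitch: one value per input token
pitch = torch.tensor([[0.2, 0.5, -0.1, 0.8, 0.3]])  # shape: (batch, tokens)

# Exaggerate pitch variation 1.5x for more expressive speech, and shift the
# whole contour up slightly to raise the perceived pitch
expressive_pitch = pitch * 1.5 + 0.1

# The modified contour would then condition the mel-spectrogram decoder
# in place of the model's own prediction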

HiFiGAN [3] is a generative adversarial network (GAN) that generates audio from the mel spectrograms produced by the Multi-speaker FastPitch in (1). The generator uses transposed convolutions to upsample mel spectrograms to audio. During training, the model uses a powerful discriminator consisting of small sub-discriminators, each focusing on a specific periodic part of the raw waveform.
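
The upsampling idea can be shown with a toy PyTorch sketch (illustrative only; the real HiFiGAN generator uses larger, learned configurations and interleaves upsampling layers with multi-receptive-field fusion blocks):

import torch
import torch.nn as nn

# Toy upsampler: each ConvTranspose1d layer upsamples time by 8x (64x total),
# turning an 80-band mel spectrogram into a single raw-audio channel
upsampler = nn.Sequential(
    nn.ConvTranspose1d(80, 40, kernel_size=16, stride=8, padding=4),
    nn.LeakyReLU(0.1),
    nn.ConvTranspose1d(40, 1, kernel_size=16, stride=8, padding=4),
    nn.Tanh(),  # audio samples constrained to [-1, 1]
)

mel = torch.randn(1, 80, 100)   # (batch, mel_bands, frames)
audio = upsampler(mel)          # (batch, 1, 6400) -- 64 samples per frame
print(audio.shape)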

How to Use this Model

The model is available for use in the NeMo toolkit [4], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

To generate a spectrogram in the voice of a particular speaker, you will need to provide a speaker ID to FastPitch. Speaker IDs range from 1 to 20. For security purposes, the model generates spectrograms for synthetic speakers created by interpolating the original HiFiTTS speakers.

NOTE: For best results, use the vocoder (HiFiGAN) checkpoint from this model card together with the mel-spectrogram generator (FastPitch) checkpoint.

Automatically load the model from NGC

# Load the mel-spectrogram generator (FastPitch)
from nemo.collections.tts.models import FastPitchModel
spec_generator = FastPitchModel.from_pretrained("tts_en_fastpitch_multispeaker")

# Load the vocoder (HiFiGAN)
from nemo.collections.tts.models import HifiGanModel
model = HifiGanModel.from_pretrained(model_name="tts_en_hifitts_hifigan_ft_fastpitch")

# Generate audio for a given synthetic speaker
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
speaker_id = 10
spectrogram = spec_generator.generate_spectrogram(tokens=parsed, speaker=speaker_id)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)

# Save the audio to disk in a file called speech.wav (44.1 kHz mono)
sf.write("speech.wav", audio.to('cpu').detach().numpy()[0], 44100)

References

[1] FastPitch: Parallel Text-to-speech with Pitch Prediction: https://arxiv.org/abs/2006.06873

[2] One TTS Alignment To Rule Them All: https://arxiv.org/abs/2108.10447

[3] HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis: https://arxiv.org/abs/2010.05646

[4] NVIDIA NeMo Toolkit: https://github.com/NVIDIA/NeMo

Input:

For FastPitch (1st Stage): Text Strings in English

Other Properties Related to Input: 400-character text string limit
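
If input text may exceed this limit, one option is to split it at sentence boundaries before synthesis and synthesize each chunk separately. Below is a plain-Python sketch; chunk_text is a hypothetical helper, and the 400-character limit is the only value taken from this card.

import re

def chunk_text(text: str, max_chars: int = 400):
    """Split text into chunks of at most max_chars, preferring sentence boundaries."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if len(current) + len(sentence) + 1 <= max_chars:
            current = f"{current} {sentence}".strip()
        else:
            if current:
                chunks.append(current)
            current = sentence[:max_chars]  # hard-truncate oversized sentences
    if current:
        chunks.append(current)
    return chunks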

Output:

For HiFiGAN (2nd Stage): Audio of shape (batch x time) in WAV format

Output Parameters Related to Output: Mono, encoded 16-bit audio; 20-second maximum length. Depending on the input, this model can output a female or a male voice for American English, with six (6) emotions for the female voice and four (4) emotions for the male voice. The female voice emotions are “neutral,” “calm,” “happy,” “angry,” “fearful,” and “sad.” The male voice emotions are “neutral,” “calm,” “happy,” and “angry.”

Other Properties Related to Output: 20-second maximum length

Software Integration:

Runtime Engine(s): Riva 2.13.0

Supported Hardware Platform(s):

  • NVIDIA Volta V100
  • NVIDIA Turing T4
  • NVIDIA A100 GPU
  • NVIDIA A30 GPU
  • NVIDIA A10 GPU
  • NVIDIA H100 GPU
  • NVIDIA L4 GPU
  • NVIDIA L40 GPU
  • NVIDIA Jetson Orin
  • NVIDIA Jetson AGX Xavier
  • NVIDIA Jetson NX Xavier

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

tts-FastPitch_44k_EnglishUS_IPA_v1.10.0

Training & Evaluation:

Training Dataset:

Data Collection Method by Dataset:

  • Human

Properties (Quantity, Dataset Descriptions, Sensor(s)): This model is trained on a proprietary dataset of audio-text pairs sampled at 44100 Hz, which contains one female and one male voice speaking US English. While both voices were recorded for all emotions, only the emotions that passed the evaluation standard for expressiveness and quality are released. The dataset also contains a subset of sentences with different words emphasized.

Evaluation Dataset:

Data Collection Method by Dataset:

  • Human

Properties (Quantity, Dataset Descriptions, Sensor(s)): This model is evaluated on a proprietary dataset sampled at 44100 Hz, which contains one female and one male voice speaking US English. While both voices were recorded for all emotions, only the emotions that passed the evaluation standard for expressiveness and quality are released. The dataset also contains a subset of sentences with different words emphasized.

Inference:

Engine: Triton
Test Hardware:

  • NVIDIA Volta V100
  • NVIDIA Turing T4
  • NVIDIA A100 GPU
  • NVIDIA A30 GPU
  • NVIDIA A10 GPU
  • NVIDIA H100 GPU
  • NVIDIA L4 GPU
  • NVIDIA L40 GPU
  • NVIDIA Jetson Orin
  • NVIDIA Jetson AGX Xavier
  • NVIDIA Jetson NX Xavier

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcard. Please report security vulnerabilities or NVIDIA AI Concerns here.