RIVA EnglishUS Energy Hifigan

Description
HifiGAN model fine-tuned for the energy-conditioned, multispeaker IPA FastPitch model.
Publisher
NVIDIA
Latest Version
deployable_v1.1
Modified
June 28, 2024
Size
53.2 MB

Speech Synthesis: English-US Multispeaker - HifiGAN Model Overview

Description:

The English-US Multispeaker FastPitch-HifiGAN model synthesizes speech from text using two model components: FastPitch and HifiGAN. This model is ready for commercial use.

HifiGAN is a neural vocoder model for text-to-speech applications. It is the second part of a two-stage speech synthesis pipeline.

This HifiGAN model was trained to be used with a corresponding FastPitch model, available separately in the NGC catalog. FastPitch is a mel-spectrogram generator used as the first stage of a neural text-to-speech system, with a neural vocoder as the second stage. The model uses the International Phonetic Alphabet (IPA) for training and inference, and it can output a female or a male voice for US English.
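As a rough sketch of the two-stage pipeline, the snippet below runs a FastPitch and a HifiGAN checkpoint through NeMo's TTS APIs: FastPitch produces a mel-spectrogram, HifiGAN converts it to audio. The checkpoint filenames, speaker index, and output path are placeholders; the deployable .riva artifacts on this page are served through Riva rather than loaded this way.

    import soundfile as sf
    from nemo.collections.tts.models import FastPitchModel, HifiGanModel

    # Placeholder checkpoint paths (assumed filenames, not part of this model card).
    spec_generator = FastPitchModel.restore_from("fastpitch.nemo").eval()
    vocoder = HifiGanModel.restore_from("hifigan.nemo").eval()

    # Stage 1: text -> tokens -> mel-spectrogram. The speaker index is a placeholder.
    tokens = spec_generator.parse("Hello, this is a two-stage TTS example.")
    spectrogram = spec_generator.generate_spectrogram(tokens=tokens, speaker=0)

    # Stage 2: mel-spectrogram -> waveform via the HifiGAN vocoder.
    audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)

    sf.write("speech.wav", audio.squeeze().detach().cpu().numpy(), samplerate=44100)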

References:

FastPitch paper: https://arxiv.org/abs/2006.06873

HifiGAN paper: https://arxiv.org/abs/2010.05646

Model Architecture:

Architecture Type: Transformer + Generative Adversarial Network (GAN)

Network Architecture: FastPitch + HifiGAN

FastPitch is a fully parallel, transformer-based text-to-speech model conditioned on fundamental-frequency contours. The model predicts pitch contours during inference; by altering these predictions, the generated speech can be made more expressive, better match the semantics of the utterance, and be more engaging to the listener. Because the architecture is fully parallel, FastPitch achieves a much higher real-time factor than Tacotron 2 for mel-spectrogram synthesis of a typical utterance.
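The "altering pitch predictions" idea can be illustrated with a small, purely conceptual sketch: scaling each frame's deviation from the utterance-mean pitch flattens or exaggerates intonation. This is not the packaged model's API; the pitch values, frame layout, and the zeros-as-unvoiced convention are assumptions made only for illustration.

    import numpy as np

    def scale_pitch_contour(pitch_hz: np.ndarray, factor: float) -> np.ndarray:
        """Scale a predicted pitch contour's deviation from its mean.

        factor > 1.0 exaggerates intonation (more expressive speech);
        factor < 1.0 flattens it. Frames at 0 Hz are treated as unvoiced
        and left untouched (an assumed convention, not from this card).
        """
        voiced = pitch_hz > 0
        mean_f0 = pitch_hz[voiced].mean()
        adjusted = pitch_hz.copy()
        adjusted[voiced] = mean_f0 + factor * (pitch_hz[voiced] - mean_f0)
        return adjusted

    # Example: a short synthetic contour around 200 Hz with one unvoiced frame.
    contour = np.array([210.0, 220.0, 0.0, 190.0, 180.0, 205.0])
    more_expressive = scale_pitch_contour(contour, factor=1.5)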

HifiGAN is a neural vocoder based on a generative adversarial network framework. During training, the model uses a powerful discriminator consisting of small sub-discriminators, each one focusing on specific periodic parts of a raw waveform.
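The per-period view that each sub-discriminator sees can be sketched as a simple reshape of the waveform, as in the illustration below. The period values follow the HifiGAN paper; the code is a conceptual sketch of the multi-period discriminator's input, not the packaged model.

    import torch

    def periodic_view(waveform: torch.Tensor, period: int) -> torch.Tensor:
        """Reshape a waveform of shape (batch, T) into (batch, 1, T // period, period).

        Each sub-discriminator in HifiGAN's multi-period discriminator operates on
        one such view, so it focuses on every `period`-th sample along one axis.
        """
        batch, t = waveform.shape
        if t % period != 0:  # pad so the length divides evenly into the period
            pad = period - (t % period)
            waveform = torch.nn.functional.pad(waveform, (0, pad), mode="reflect")
            t = waveform.shape[1]
        return waveform.view(batch, 1, t // period, period)

    # The HifiGAN paper uses the prime periods 2, 3, 5, 7, and 11.
    audio = torch.randn(1, 44100)  # one second of 44.1 kHz audio
    views = {p: periodic_view(audio, p) for p in (2, 3, 5, 7, 11)}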

Input:

For FastPitch (1st Stage): Text Strings in English

Other Properties Related to Input: 400 Character Text String Limit
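Because each request is limited to 400 characters, longer passages need to be split before synthesis. The sketch below shows one sentence-aware way to do that; the 400-character limit is the only constraint taken from this card, and the splitting heuristic itself is an assumption.

    import re

    MAX_CHARS = 400  # per-request text limit stated above

    def chunk_text(text: str, max_chars: int = MAX_CHARS) -> list[str]:
        """Split text into chunks no longer than max_chars, preferring sentence ends."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        chunks, current = [], ""
        for sentence in sentences:
            if len(current) + len(sentence) + 1 <= max_chars:
                current = f"{current} {sentence}".strip()
            else:
                if current:
                    chunks.append(current)
                # A single sentence longer than the limit is hard-split.
                while len(sentence) > max_chars:
                    chunks.append(sentence[:max_chars])
                    sentence = sentence[max_chars:]
                current = sentence
        if current:
            chunks.append(current)
        return chunks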

Output:

For HifiGAN (2nd Stage): Audio of shape (batch x time) in wav format

Other Properties Related to Output: Mono, 16-bit encoded audio; 20-second maximum length. Depending on the input, the model can output a female or a male voice for American English, with six (6) emotions for the female voice and four (4) emotions for the male voice. The female voice supports "neutral", "calm", "happy", "angry", "fearful", and "sad"; the male voice supports "neutral", "calm", "happy", and "angry".
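A minimal request sketch using the nvidia-riva-client Python package is shown below. The server address and the voice name (including the emotion suffix) are placeholders that depend on how the model is deployed; consult your Riva deployment for the actual voice names.

    import wave

    import riva.client

    # Placeholder endpoint; adjust to your Riva server.
    auth = riva.client.Auth(uri="localhost:50051")
    tts = riva.client.SpeechSynthesisService(auth)

    response = tts.synthesize(
        text="Hello, this is a test of the English-US multispeaker voice.",
        voice_name="English-US.Female.Happy",  # hypothetical emotion sub-voice name
        language_code="en-US",
        sample_rate_hz=44100,
    )

    # response.audio holds mono 16-bit PCM samples; wrap them in a WAV container.
    with wave.open("output.wav", "wb") as f:
        f.setnchannels(1)        # mono
        f.setsampwidth(2)        # 16-bit
        f.setframerate(44100)
        f.writeframes(response.audio)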

Software Integration:

Runtime Engine(s): Riva 2.13.0 or greater

Supported Hardware Platform(s):

  • NVIDIA Volta V100
  • NVIDIA Turing T4
  • NVIDIA A100 GPU
  • NVIDIA A30 GPU
  • NVIDIA A10 GPU
  • NVIDIA H100 GPU
  • NVIDIA L4 GPU
  • NVIDIA L40 GPU
  • NVIDIA Jetson Orin
  • NVIDIA Jetson AGX Xavier
  • NVIDIA Jetson NX Xavier

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

HifiGAN_44k_EnglishUS_Emotion_Energy_IPA

Training & Evaluation:

Training Dataset:

Data Collection Method by dataset:

  • Human
    Properties (Quantity, Dataset Descriptions, Sensor(s)): This model is trained on a proprietary dataset of audio-text pairs sampled at 44100 Hz, which contains one female and one male voice speaking US English. Although both voices are trained on all emotions, only the emotions that passed the evaluation standard for expressiveness and quality are released. The dataset also contains a subset of sentences with different words emphasized.

Evaluation Dataset:

Data Collection Method by dataset:

  • Human
    Properties (Quantity, Dataset Descriptions, Sensor(s)): This model is evaluated on a proprietary dataset sampled at 44100 Hz, which contains one female and one male voice speaking US English. Although both voices are trained on all emotions, only the emotions that passed the evaluation standard for expressiveness and quality are released. The dataset also contains a subset of sentences with different words emphasized.

Inference:

Engine: Triton
Test Hardware:

  • NVIDIA Volta V100
  • NVIDIA Turing T4
  • NVIDIA A100 GPU
  • NVIDIA A30 GPU
  • NVIDIA A10 GPU
  • NVIDIA H100 GPU
  • NVIDIA L4 GPU
  • NVIDIA L40 GPU
  • NVIDIA Jetson Orin
  • NVIDIA Jetson AGX Xavier
  • NVIDIA Jetson NX Xavier

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI Concerns here.

License

By downloading and using the models and resources packaged with Riva Conversational AI, you accept the terms of the Riva license.