RIVA Mandarin CN Fastpitch with Emotions

Description
Riva FastPitch IPA multispeaker model with emotions
Publisher
NVIDIA
Latest Version
2.15.0
Modified
June 28, 2024
Size
85.85 MB

Speech Synthesis: Mandarin Chinese Multispeaker - FastPitch Model Overview

Description:

The Mandarin Chinese Multispeaker FastPitch-HifiGAN model synthesizes audio from input text using two model components: FastPitch and HifiGAN. This model is ready for commercial use.

FastPitch is a mel-spectrogram generator used as the first part of a neural text-to-speech system with a neural vocoder. This model uses the International Phonetic Alphabet (IPA) for inference and training, and it can output female and male voices for Mandarin Chinese.

This FastPitch model was trained to be used with a corresponding HifiGAN model available here. HifiGAN is a neural vocoder model for text-to-speech applications. It is the second part of a two-stage speech synthesis pipeline.
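
For orientation, the sketch below shows one way a Riva server with this FastPitch + HifiGAN pair deployed might be called from the nvidia-riva-client Python package. The server address and the voice name are placeholder assumptions; actual voice and emotion names depend on how the model is deployed with riva-build/riva-deploy.

import wave

import riva.client

# Connect to a running Riva server; the URI is a placeholder for your deployment.
auth = riva.client.Auth(uri="localhost:50051")
tts = riva.client.SpeechSynthesisService(auth)

# The voice name below is hypothetical; use the voice/emotion names exposed by
# your own deployment of this model.
resp = tts.synthesize(
    text="你好，欢迎使用语音合成。",
    voice_name="Mandarin-CN.Female.calm",
    language_code="zh-CN",
    sample_rate_hz=44100,
)

# resp.audio contains mono, 16-bit PCM samples; save them as a WAV file.
with wave.open("output.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(44100)
    out.writeframes(resp.audio)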

References:

FastPitch paper: https://arxiv.org/abs/2006.06873

HifiGAN paper: https://arxiv.org/abs/2010.05646

Model Architecture:

Architecture Type: Transformer + Generative Adversarial Network (GAN)

Network Architecture: FastPitch + HifiGAN

FastPitch is a fully parallel, transformer-based text-to-speech model conditioned on fundamental frequency contours. The model predicts pitch contours during inference; by altering these predictions, the generated speech can be made more expressive, better match the semantics of the utterance, and be more engaging to the listener. Because the architecture is fully parallel, FastPitch achieves a much higher real-time factor than Tacotron 2 for mel-spectrogram synthesis of a typical utterance.

HifiGAN is a neural vocoder based on a generative adversarial network framework. During training, the model uses a powerful discriminator consisting of small sub-discriminators, each focusing on a specific periodic part of the raw waveform.
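
To make the two-stage flow concrete, the sketch below reproduces it with the open-source NeMo toolkit, using NeMo's public English FastPitch and HiFi-GAN checkpoints as stand-ins; the .riva artifacts packaged in this catalog entry are deployed through Riva rather than loaded this way.

import soundfile as sf
import torch
from nemo.collections.tts.models import FastPitchModel, HifiGanModel

# Public English NeMo checkpoints used purely as stand-ins for illustration.
fastpitch = FastPitchModel.from_pretrained("tts_en_fastpitch").eval()
vocoder = HifiGanModel.from_pretrained("tts_en_hifigan").eval()

with torch.no_grad():
    tokens = fastpitch.parse("FastPitch predicts a mel-spectrogram in parallel.")
    spec = fastpitch.generate_spectrogram(tokens=tokens)      # stage 1: text -> mel-spectrogram
    audio = vocoder.convert_spectrogram_to_audio(spec=spec)   # stage 2: mel-spectrogram -> waveform

# The English stand-in checkpoints run at 22050 Hz; this Mandarin model is trained at 44100 Hz.
sf.write("two_stage_sample.wav", audio.squeeze().cpu().numpy(), 22050)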

Input:

For FastPitch (1st Stage): Text Strings in Mandarin Chinese

Other Properties Related to Input: 400 Character Text String Limit

Output:

For HifiGAN (2nd Stage): Audio of shape (batch x time) in wav format

Other Properties Related to Output: Mono, 16-bit encoded audio; 20-second maximum length. Depending on the input, this model can output a female or a male voice for Mandarin Chinese, with two (2) emotions for the female voice and six (6) emotions for the male voice. The female voice supports “neutral” and “calm.” The male voice supports “neutral,” “calm,” “happy,” “fearful,” “sad,” and “angry.”
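
Because each request is limited to a 400-character input string and roughly 20 seconds of output audio, longer passages need to be split before synthesis. The helper below is one illustrative way to do that by packing Mandarin sentences into chunks under the limit; only the 400-character figure comes from this card, the splitting strategy itself is an assumption.

MAX_CHARS = 400            # per-request input limit from this model card
SENTENCE_ENDS = "。！？；"   # Mandarin sentence-final punctuation (an assumption, not a Riva rule)

def split_for_tts(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    # Cut the text into sentences, then pack sentences into chunks under the limit.
    sentences, current = [], ""
    for ch in text:
        current += ch
        if ch in SENTENCE_ENDS:
            sentences.append(current)
            current = ""
    if current:
        sentences.append(current)

    chunks, buf = [], ""
    for sentence in sentences:
        if buf and len(buf) + len(sentence) > max_chars:
            chunks.append(buf)
            buf = ""
        buf += sentence
    if buf:
        chunks.append(buf)
    return chunks

# Each returned chunk can be sent as a separate synthesize() request; a single
# sentence longer than the limit would still need to be truncated or split further.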

Software Integration:

Runtime Engine(s): Riva 2.13.0 or greater

Supported Hardware Platform(s):

  • NVIDIA A100 GPU
  • NVIDIA A30 GPU
  • NVIDIA A10 GPU
  • NVIDIA H100 GPU
  • NVIDIA Jetson Orin
  • NVIDIA Jetson AGX Xavier
  • NVIDIA Jetson NX Xavier
  • NVIDIA L4 GPU
  • NVIDIA L40 GPU
  • NVIDIA Turing T4
  • NVIDIA Volta V100

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

FastPitch_Zh-CN-Multispeaker-1.1

Training & Evaluation:

Training Dataset:

Data Collection Method by Dataset:

  • Human
    Properties (Quantity, Dataset Descriptions, Sensor(s)): This model is trained on a proprietary dataset of audio-text pairs sampled at 44100 Hz, containing one female and one male voice speaking Mandarin Chinese. Both speakers were recorded for all emotions, but only the emotions that passed the evaluation standard for expressiveness and quality are released. The dataset also contains a subset of sentences with different words emphasized.

Evaluation Dataset:

Data Collection Method by Dataset:

  • Human
    Properties (Quantity, Dataset Descriptions, Sensor(s)): This model is tested on a proprietary dataset sampled at 44100 Hz, containing one female and one male voice speaking Mandarin Chinese. Both speakers were recorded for all emotions, but only the emotions that passed the evaluation standard for expressiveness and quality are released. The dataset also contains a subset of sentences with different words emphasized.

Inference:

Engine: Triton

Test Hardware:

  • NVIDIA A100 GPU
  • NVIDIA A30 GPU
  • NVIDIA A10 GPU
  • NVIDIA H100 GPU
  • NVIDIA Jetson Orin
  • NVIDIA Jetson AGX Xavier
  • NVIDIA Jetson NX Xavier
  • NVIDIA L4 GPU
  • NVIDIA L40 GPU
  • NVIDIA Turing T4
  • NVIDIA Volta V100

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI Concerns here.

License

By downloading and using the models and resources packaged with Riva Conversational AI, you accept the terms of the Riva license.