The Mandarin Chinese Multispeaker FastPitch-HifiGAN model synthesizes speech audio from text using two model components: FastPitch and HifiGAN. This model is ready for commercial use.
FastPitch is a mel-spectrogram generator used as the first part of a neural text-to-speech system with a neural vocoder. This model uses the International Phonetic Alphabet (IPA) for inference and training, and it can output male and female voices for Mandarin Chinese.
This FastPitch model was trained to be used with a corresponding HifiGAN model available here. HifiGAN is a neural vocoder model for text-to-speech applications. It is the second part of a two-stage speech synthesis pipeline.
FastPitch paper: https://arxiv.org/abs/2006.06873
HifiGAN paper: https://arxiv.org/abs/2010.05646
Architecture Type: Transformer + Generative Adversarial Network (GAN)
Network Architecture: FastPitch + HifiGAN
FastPitch is a fully-parallel, transformer-based text-to-speech model conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be made more expressive, better match the semantics of the utterance, and be more engaging to the listener. Thanks to its fully-parallel Transformer architecture, FastPitch achieves a much higher real-time factor than Tacotron2 for mel-spectrogram synthesis of a typical utterance.
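The pitch-conditioning idea above can be illustrated with a small NumPy sketch. The function below is a hypothetical helper (not part of FastPitch or Riva) that scales and shifts a predicted F0 contour, which is the kind of alteration that makes the generated speech flatter or more expressive; frames with an F0 of 0 Hz are treated as unvoiced and left untouched.

```python
import numpy as np

def alter_pitch(f0, scale=1.0, shift_hz=0.0):
    """Scale and shift a predicted fundamental-frequency (F0) contour.

    Frames with F0 == 0 are treated as unvoiced and left unchanged.
    """
    f0 = np.asarray(f0, dtype=np.float64)
    out = f0.copy()
    voiced = f0 > 0
    out[voiced] = f0[voiced] * scale + shift_hz
    return out

# A toy contour in Hz; 0.0 marks unvoiced frames.
contour = np.array([0.0, 180.0, 200.0, 220.0, 0.0])
# Halving the variation and re-centering produces a flatter, calmer contour.
flattened = alter_pitch(contour, scale=0.5, shift_hz=100.0)
```

Feeding such an altered contour back into the decoder in place of the predicted one is how pitch control is typically exposed in FastPitch-style models.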
HifiGAN is a neural vocoder based on a generative adversarial network framework. During training, the model uses a powerful discriminator consisting of small sub-discriminators, each one focusing on specific periodic parts of a raw waveform.
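The two-stage pipeline described above (text → mel-spectrogram → waveform) can be sketched with stand-in functions. The stubs below are purely illustrative, not the real models; the mel band count, hop length, and toy duration model are all assumptions chosen only to show the shapes flowing between the stages.

```python
import numpy as np

N_MELS = 80       # typical mel-spectrogram band count (assumption)
HOP_LENGTH = 256  # waveform samples per spectrogram frame (assumption)

def spectrogram_generator_stub(text):
    """Stand-in for FastPitch: map text to a (n_mels, frames) mel-spectrogram."""
    frames = max(1, 10 * len(text))  # toy duration model: 10 frames per character
    return np.zeros((N_MELS, frames), dtype=np.float32)

def vocoder_stub(mel):
    """Stand-in for HifiGAN: map a mel-spectrogram to a mono waveform."""
    _, frames = mel.shape
    return np.zeros(frames * HOP_LENGTH, dtype=np.float32)

mel = spectrogram_generator_stub("你好")
audio = vocoder_stub(mel)
```

In a real deployment the two stages would be the pretrained FastPitch and HifiGAN checkpoints served together, but the data flow between them is exactly this: the vocoder consumes the spectrogram generator's output.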
For FastPitch (1st Stage): Text Strings in Mandarin Chinese
Other Properties Related to Input: 400 Character Text String Limit
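A client must keep each request under the 400-character limit noted above. The helper below is a hypothetical pre-processing sketch (not a Riva API) that naively splits longer text into compliant chunks; a production version would prefer to cut at sentence boundaries (e.g. 。！？) rather than mid-phrase.

```python
MAX_CHARS = 400  # per-request character limit for the FastPitch stage

def chunk_text(text, limit=MAX_CHARS):
    """Split text into chunks that each respect the character limit."""
    if not text:
        return []
    return [text[i:i + limit] for i in range(0, len(text), limit)]
```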
For HifiGAN (2nd Stage): Audio of shape (batch x time) in wav format
Other Properties Related to Output: Mono, 16-bit encoded audio; 20-second maximum length. Depending on the input, this model can output a female or a male voice for Mandarin Chinese, with two (2) emotions for the female voice and six (6) emotions for the male voice. The female voice supports “neutral” and “calm.” The male voice supports “neutral,” “calm,” “happy,” “fearful,” “sad,” and “angry.”
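The voice/emotion combinations and the maximum output size stated above can be captured in a small client-side lookup. This is an illustrative sketch, not a Riva API; the mapping mirrors the card, while the output sample rate is an assumption (the card does not state it) used only to show how the 20-second, 16-bit mono cap translates into bytes.

```python
# Voice-to-emotion mapping as stated in the model card.
SUPPORTED_EMOTIONS = {
    "female": ("neutral", "calm"),
    "male": ("neutral", "calm", "happy", "fearful", "sad", "angry"),
}

def is_supported(voice, emotion):
    """Check whether a (voice, emotion) combination is available."""
    return emotion in SUPPORTED_EMOTIONS.get(voice, ())

# Upper bound on raw output size for one request.
SAMPLE_RATE_HZ = 22050   # assumed output rate; not stated in the card
MAX_SECONDS = 20         # maximum output length per the card
BYTES_PER_SAMPLE = 2     # 16-bit mono
max_output_bytes = MAX_SECONDS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
```

Such a lookup lets a client reject an unsupported request (e.g. a "happy" female voice) before sending it to the server.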
Runtime Engine(s): Riva 2.13.0 or greater
Supported Hardware Platform(s):
Supported Operating System(s):
FastPitch_Zh-CN-Multispeaker-1.1
** Data Collection Method by dataset
Engine: Triton
Test Hardware:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI Concerns here.
By downloading and using the models and resources packaged with Riva Conversational AI, you accept the terms of the Riva license.