Speech Synthesis: FastPitch 1.1 Model Card
FastPitch is a mel-spectrogram generator, designed to be used as the first part of a neural text-to-speech system in conjunction with a neural vocoder. This model uses the International Phonetic Alphabet (IPA) for inference and training.
FastPitch is a fully parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be made more expressive, better match the semantics of the utterance, and ultimately be more engaging to the listener. FastPitch is based on a fully parallel Transformer architecture, with a much higher real-time factor than Tacotron2 for mel-spectrogram synthesis of a typical utterance.
This model is trained on proprietary data sampled at 44,100 Hz and can be used to generate a Spanish (US) voice. It supports one female and one male voice. The female voice comes with neutral, calm, angry, and sad emotions; the male voice comes with neutral, calm, happy, and angry emotions. Each emotion is accessed as a separate speaker, for example, Female-Calm or Male-Happy.
How to Use this Model
FastPitch is intended to be used as the first part of a two stage speech synthesis pipeline. FastPitch takes text and produces a mel-spectrogram. The second stage takes the generated mel-spectrogram and returns audio.
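The two-stage data flow can be sketched as follows. This is a minimal, hypothetical illustration using stub functions in place of the real FastPitch and vocoder models; the mel channel count and vocoder hop length are assumptions chosen for illustration, and only the tensor shapes at each stage are meant to be informative.

```python
import numpy as np

MEL_CHANNELS = 80   # typical mel filterbank size (assumption)
HOP_LENGTH = 512    # vocoder upsampling factor (assumption)

def fastpitch_generate(texts):
    """Stage 1 stand-in: map a batch of text strings to mel-spectrograms
    of shape (batch, mel_channels, time). A placeholder time axis is
    derived from the text length."""
    time_frames = max(len(t) for t in texts)
    return np.zeros((len(texts), MEL_CHANNELS, time_frames), dtype=np.float32)

def vocoder_generate(mels):
    """Stage 2 stand-in: map mel-spectrograms to waveforms of shape
    (batch, time * hop_length)."""
    batch, _, time_frames = mels.shape
    return np.zeros((batch, time_frames * HOP_LENGTH), dtype=np.float32)

# Text in, mel-spectrogram between the stages, audio out.
mels = fastpitch_generate(["Hola, mundo."])
audio = vocoder_generate(mels)
print(mels.shape)   # (1, 80, 12)
print(audio.shape)  # (1, 6144)
```

In a real deployment, both stages run inside the Riva speech synthesis service, which exposes them as a single text-in, audio-out call.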
The encryption key for this model is tlt_encode.
Input: Spanish text strings
Output: Mel-spectrogram of shape (batch x mel_channels x time)
Refer to the Riva documentation for more information.
By downloading and using the models and resources packaged with Riva Conversational AI, you accept the terms of the Riva license.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.