FastPitch is a mel-spectrogram generator, designed to be used as the first part of a neural text-to-speech system in conjunction with a neural vocoder. This model uses the International Phonetic Alphabet (IPA) for inference and training instead of ARPABET.
FastPitch is a fully parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be made more expressive, better match the semantics of the utterance, and ultimately be more engaging to the listener. FastPitch is built on a fully parallel Transformer architecture, with a much higher real-time factor than Tacotron 2 for mel-spectrogram synthesis of a typical utterance.
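To illustrate the pitch-control idea, the sketch below shifts a predicted pitch contour by a fixed number of semitones before decoding. Note that `predict_pitch` and `decode` are hypothetical stand-ins for the two halves of a FastPitch-style model, not a real library API:

```python
# Illustrative only: altering a predicted pitch contour before decoding.
# `predict_pitch` and `decode` are hypothetical stand-ins for the two
# halves of a FastPitch-style model, not a real library API.
import torch

def synthesize_with_pitch_shift(model, tokens: torch.Tensor,
                                shift_semitones: float = 2.0) -> torch.Tensor:
    pitch = model.predict_pitch(tokens)              # predicted F0 per input symbol
    pitch = pitch * 2.0 ** (shift_semitones / 12.0)  # raise the whole contour by N semitones
    return model.decode(tokens, pitch=pitch)         # condition decoding on the edited contour
```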
This model is trained on a proprietary dataset sampled at 44.1 kHz and can be used to generate English speech with an American accent. It supports one male voice and one female voice.
FastPitch is intended to be used as the first part of a two-stage speech synthesis pipeline: FastPitch takes text and produces a mel spectrogram, and the second stage (a neural vocoder) takes the generated mel spectrogram and returns audio, as sketched below.
Input: English text strings
Output: Mel spectrogram of shape (batch x mel_channels x time)
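For concreteness, here is a minimal sketch of the two-stage pipeline using the open-source NeMo toolkit. The checkpoint names and the 22.05 kHz sample rate belong to the public NeMo models and are assumptions here; they are not the Riva-packaged 44.1 kHz model, which runs both stages behind a single synthesis service and returns audio directly:

```python
# A minimal two-stage TTS sketch. The checkpoint names below refer to
# public NeMo models (an assumption), not the Riva-packaged model.
import soundfile as sf
from nemo.collections.tts.models import FastPitchModel, HifiGanModel

spec_gen = FastPitchModel.from_pretrained("tts_en_fastpitch").eval()
vocoder = HifiGanModel.from_pretrained("tts_en_hifigan").eval()

tokens = spec_gen.parse("Hello, this is FastPitch.")            # text -> token IDs
spectrogram = spec_gen.generate_spectrogram(tokens=tokens)      # stage 1: (batch x mel_channels x time)
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)  # stage 2: waveform

sf.write("output.wav", audio.detach().cpu().numpy()[0], samplerate=22050)
```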
FastPitch paper: https://arxiv.org/abs/2006.06873
By downloading and using the models and resources packaged with Riva Conversational AI, you accept the terms of the Riva license.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.