RADTTS is a mel-spectrogram generator designed to serve as the first stage of a neural text-to-speech system, paired with a neural vocoder. The model uses the International Phonetic Alphabet (IPA) rather than ARPABET for both training and inference.
RADTTS is a parallel flow-based generative network for text-to-speech synthesis. It extends prior parallel approaches by additionally modeling speech rhythm as a separate generative distribution to facilitate variable token duration during inference.
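To make the IPA-versus-ARPABET distinction concrete, the snippet below contrasts the two representations for a sample word. The transcriptions are standard dictionary forms; the actual phonemizer used to produce IPA input for RADTTS is not specified here.

```python
# Illustrative comparison of phoneme representations for "hello".
# ARPABET encodes stress with digits on vowels; IPA uses a stress mark.
arpabet = "HH AH0 L OW1"  # CMUdict-style ARPABET
ipa = "h\u0259\u02c8lo\u028a"  # equivalent IPA transcription: həˈloʊ

print(f"ARPABET: {arpabet}")
print(f"IPA:     {ipa}")
```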
This model is trained on a proprietary dataset sampled at 44.1 kHz and generates English speech with an American accent. It supports one male voice and one female voice.
RADTTS is intended to be used as the first stage of a two-stage speech synthesis pipeline: RADTTS takes text and produces a mel spectrogram, and the second stage takes the generated mel spectrogram and returns audio.
Input: English text strings
Output: Mel spectrogram of shape (batch x mel_channels x time)
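The two-stage pipeline can be sketched as below. The functions `text_to_mel` and `vocoder` are hypothetical stand-ins for a real RADTTS model and neural vocoder; the mel-channel count, hop length, and placeholder duration logic are assumptions for illustration, not the Riva API.

```python
import numpy as np

N_MELS = 80        # assumed mel-channel count; the real model may differ
HOP_LENGTH = 256   # assumed vocoder upsampling factor per mel frame

def text_to_mel(text: str) -> np.ndarray:
    """Stage 1 stand-in: map text to a (batch, mel_channels, time) array."""
    n_frames = max(1, 10 * len(text))  # placeholder duration model
    return np.zeros((1, N_MELS, n_frames), dtype=np.float32)

def vocoder(mel: np.ndarray) -> np.ndarray:
    """Stage 2 stand-in: upsample each mel frame to HOP_LENGTH audio samples."""
    batch, _, n_frames = mel.shape
    return np.zeros((batch, n_frames * HOP_LENGTH), dtype=np.float32)

mel = text_to_mel("Hello world")   # (batch x mel_channels x time)
audio = vocoder(mel)               # (batch x samples)
print(mel.shape, audio.shape)
```

In a real deployment both stages would be trained networks; the point here is only the data flow and the (batch x mel_channels x time) shape contract between them.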
RADTTS paper: https://openreview.net/pdf?id=0NQwnnwAORi
By downloading and using the models and resources packaged with Riva Conversational AI, you accept the terms of the Riva license.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.