The T5-TTS model uses an encoder-decoder transformer architecture for speech synthesis. The encoder processes the input text, while the auto-regressive decoder is conditioned on a reference speech prompt from the target speaker. The decoder then generates speech tokens by attending to the encoder's output through the transformer's cross-attention heads, which implicitly learn to align text and speech. However, this learned alignment can falter, especially when the input text contains repeated words.
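As a rough sketch of the cross-attention mechanism described above (the function and tensor shapes here are illustrative, not the model's actual implementation), each decoder speech frame queries the encoder's text states, and the resulting softmax weights form a soft speech-to-text alignment matrix:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention (single head, no projections).

    queries: (T_speech, d) decoder states for speech tokens
    keys, values: (T_text, d) encoder outputs for text tokens
    Returns the attended values and the (T_speech, T_text) attention
    matrix, which acts as an implicit alignment of speech to text.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values, weights

# Toy example: 6 text tokens, 10 speech frames, 16-dim states.
rng = np.random.default_rng(0)
text_states = rng.normal(size=(6, 16))
speech_states = rng.normal(size=(10, 16))
out, align = cross_attention(speech_states, text_states, text_states)
# Each row of `align` is a distribution over text positions; repeated
# words yield near-identical keys, which is one way this alignment
# can become ambiguous.
```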
Architecture Type: Transformer + Generative Adversarial Network (GAN)
Network Architecture: T5TTS + AudioCodec
Input:
For T5TTS (1st Stage): Text strings in English
Other Properties Related to Input: 400-character text string limit
Output:
For AudioCodec (2nd Stage): Audio of shape (batch x time) in WAV format
Other Properties Related to Output: Mono, encoded 16-bit audio; 20-second maximum length; depending on the input, this model can output a female or a male voice for American English.
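A minimal sketch of client-side checks for the constraints above, using only the Python standard library; the constant and function names are our own for illustration and are not part of Riva's API:

```python
import io
import wave

# Limits taken from the input/output specs above.
MAX_TEXT_CHARS = 400
MAX_AUDIO_SECONDS = 20

def validate_text(text: str) -> str:
    """Reject input text that exceeds the 400-character limit."""
    if len(text) > MAX_TEXT_CHARS:
        raise ValueError(f"text exceeds {MAX_TEXT_CHARS} characters")
    return text

def check_wav(data: bytes) -> None:
    """Verify audio matches the stated output format:
    mono, 16-bit samples, at most 20 seconds long."""
    with wave.open(io.BytesIO(data)) as w:
        assert w.getnchannels() == 1, "expected mono audio"
        assert w.getsampwidth() == 2, "expected 16-bit samples"
        assert w.getnframes() / w.getframerate() <= MAX_AUDIO_SECONDS

# Example: one second of silence in the expected format passes the check.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 16000)
check_wav(buf.getvalue())
validate_text("Hello from Riva.")
```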
Runtime Engine(s): Riva 2.18.0 or greater
Supported Hardware Platform(s):
Supported Operating System(s):
Engine: Triton
Test Hardware:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this model in accordance with our terms of service, developers should work with their supporting model team to ensure it meets the requirements of the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI Concerns here.
By downloading and using the models and resources packaged with Riva Conversational AI, you accept the terms of the Riva license.