HiFi-GAN is a neural vocoder model for text-to-speech applications. It is intended as the second part of a two-stage speech synthesis pipeline, with a mel-spectrogram generator such as FastPitch as the first stage.
HiFi-GAN is a neural vocoder based on a generative adversarial network framework. During training, the model uses a powerful discriminator composed of small sub-discriminators, each focusing on specific periodic parts of a raw waveform. The generator is very fast and has a small footprint, while producing high-quality speech.
This model is trained on a proprietary dataset sampled at 44.1 kHz, and can be used to generate Mandarin voices with an American accent. The model supports one male voice and one female voice. The female voice comes with neutral and calm emotions; the male voice comes with neutral, calm, happy, fearful, and angry emotions. Each voice-emotion combination is exposed as a separate speaker, for example, Female-Calm or Male-Happy.
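The voice-emotion speaker naming above can be sketched as follows. This is an illustrative enumeration only; the exact speaker strings accepted by a Riva deployment are assumptions here, so consult the Riva documentation for the authoritative list.

```python
# Hypothetical sketch of the "<Voice>-<Emotion>" speaker naming described above.
# The emotion lists mirror the model card; the string format is an assumption.
FEMALE_EMOTIONS = ["Neutral", "Calm"]
MALE_EMOTIONS = ["Neutral", "Calm", "Happy", "Fearful", "Angry"]

speakers = (
    [f"Female-{e}" for e in FEMALE_EMOTIONS]
    + [f"Male-{e}" for e in MALE_EMOTIONS]
)
print(speakers)
# ['Female-Neutral', 'Female-Calm', 'Male-Neutral', 'Male-Calm',
#  'Male-Happy', 'Male-Fearful', 'Male-Angry']
```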
HiFi-GAN is intended to be used as the second part of a two-stage speech synthesis pipeline. HiFi-GAN takes a mel-spectrogram as input and returns audio.
Input: Mel-spectrogram of shape (batch x mel_channels x time)
Output: Audio of shape (batch x time)
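The input/output contract above can be illustrated with a minimal shape-level sketch. This is not the HiFi-GAN model itself, just a stand-in function showing how a (batch x mel_channels x time) spectrogram maps to (batch x time) audio; the hop length of 256 samples per frame is an assumption for illustration, not a value from this model card.

```python
import numpy as np

HOP_LENGTH = 256  # assumed samples per mel frame; illustrative only

def fake_vocoder(mel: np.ndarray, hop_length: int = HOP_LENGTH) -> np.ndarray:
    """Stand-in for a neural vocoder, mirroring HiFi-GAN's I/O shapes.

    mel: (batch, mel_channels, frames) -> audio: (batch, frames * hop_length)
    """
    batch, mel_channels, frames = mel.shape
    # Collapse the mel-channel axis and repeat each frame hop_length times,
    # producing a waveform-shaped array. A real vocoder learns this upsampling.
    frame_energy = mel.mean(axis=1)                    # (batch, frames)
    return np.repeat(frame_energy, hop_length, axis=1)  # (batch, frames * hop_length)

mel = np.random.randn(2, 80, 100)  # batch=2, 80 mel channels, 100 frames
audio = fake_vocoder(mel)
print(audio.shape)  # (2, 25600)
```

In a real pipeline, the mel-spectrogram would come from a first-stage model such as FastPitch rather than random data.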
Refer to the Riva documentation for more information.
By downloading and using the models and resources packaged with Riva Conversational AI, you accept the terms of the Riva license.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.