
Speech Synthesis HiFi-GAN

Description: GAN-based waveform generator from mel-spectrograms.
Publisher: NVIDIA
Latest Version: deployable_v1.0
Modified: April 4, 2023
Size: 49.49 MB

Speech Synthesis: HiFi-GAN Model Card

Model overview

HiFi-GAN is a neural vocoder model for text-to-speech applications. It is intended as the second stage of a two-stage speech synthesis pipeline, with a mel-spectrogram generator such as FastPitch as the first stage.
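As an illustrative sketch of that two-stage pipeline, the snippet below chains a FastPitch spectrogram generator and a HiFi-GAN vocoder using the open-source NeMo toolkit. The NeMo workflow and the checkpoint names ("tts_en_fastpitch", "tts_hifigan") are assumptions made for illustration; the deployable package on this page targets the TAO/Riva workflow.

```python
# Minimal sketch, assuming a NeMo installation and the public NeMo checkpoint
# names below; not the deployment path of the TAO/Riva package itself.
import soundfile as sf
import torch
from nemo.collections.tts.models import FastPitchModel, HifiGanModel

spec_generator = FastPitchModel.from_pretrained(model_name="tts_en_fastpitch")
vocoder = HifiGanModel.from_pretrained(model_name="tts_hifigan")
spec_generator.eval()
vocoder.eval()

with torch.no_grad():
    # Stage 1: text -> mel spectrogram
    tokens = spec_generator.parse("Hello, this is a test of HiFi-GAN.")
    spectrogram = spec_generator.generate_spectrogram(tokens=tokens)
    # Stage 2: mel spectrogram -> raw audio waveform
    audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)

# LJSpeech-trained models operate at 22050 Hz (see the Training section below)
sf.write("hello.wav", audio.squeeze(0).cpu().numpy(), samplerate=22050)
```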

Model architecture

HiFi-GAN is a neural vocoder based on a generative adversarial network framework. During training, the model uses a powerful discriminator composed of small sub-discriminators, each focusing on a specific periodic structure of the raw waveform. The generator is fast and has a small footprint, while producing high-quality speech.
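To picture the "periodic structure" idea: each sub-discriminator of the multi-period discriminator folds the 1D waveform into a 2D grid whose width equals its period, so its 2D convolutions only see samples that are that many steps apart. The helper below is an illustrative sketch of that folding in PyTorch; the function name and shapes are not taken from the released model.

```python
# Illustration only: how a multi-period sub-discriminator "sees" the waveform.
import torch
import torch.nn.functional as F

def reshape_for_period(wav: torch.Tensor, period: int) -> torch.Tensor:
    """Fold a waveform (batch, 1, time) into (batch, 1, time // period, period).

    Each column of the resulting grid holds samples that are exactly `period`
    steps apart, so 2D convolutions over the grid focus on that periodic
    structure of the signal.
    """
    b, c, t = wav.shape
    if t % period != 0:
        wav = F.pad(wav, (0, period - t % period), mode="reflect")
        t = wav.shape[-1]
    return wav.reshape(b, c, t // period, period)

wav = torch.randn(1, 1, 22050)            # one second of audio at 22050 Hz
print(reshape_for_period(wav, 5).shape)   # torch.Size([1, 1, 4410, 5])
```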

Training

Dataset

This model is trained on the LJSpeech dataset, sampled at 22050 Hz.

How to use this model

HiFi-GAN is intended to be used as the second stage of a two-stage speech synthesis pipeline: it takes a mel spectrogram as input and returns audio.

Input: Mel spectrogram of shape (batch x mel_channels x time)

Output: Audio of shape (batch x time)
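The sketch below makes this input/output contract concrete by pushing a random mel tensor through a HiFi-GAN vocoder and printing the shapes. The NeMo checkpoint name, the 80 mel channels, and the 256-sample hop length are assumptions typical of 22050 Hz LJSpeech configurations, not values stated on this card.

```python
# Shape-contract sketch; checkpoint name, mel_channels=80, and hop length 256
# are assumptions (the actual values come from the training configuration).
import torch
from nemo.collections.tts.models import HifiGanModel

vocoder = HifiGanModel.from_pretrained(model_name="tts_hifigan")
vocoder.eval()

mel = torch.randn(1, 80, 200)  # (batch x mel_channels x time)
with torch.no_grad():
    audio = vocoder.convert_spectrogram_to_audio(spec=mel)

print(mel.shape)    # torch.Size([1, 80, 200])
print(audio.shape)  # (batch x time), e.g. torch.Size([1, 51200]) with a 256-sample hop
```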

Limitations

N/A

References

HiFi-GAN paper: Kong, J., Kim, J., and Bae, J., "HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis" (NeurIPS 2020). https://arxiv.org/abs/2010.05646

License

By downloading and using the models and resources packaged with TAO Conversational AI, you accept the terms of the Riva license.

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.