FastSpeech2HifiGanE2E is an end-to-end, non-autoregressive model that generates audio from text. It combines FastSpeech2 and HiFiGan into a single model that is trained jointly in an end-to-end manner.
The FastSpeech2 portion consists of the same transformer-based encoder and 1D-convolution-based variance adaptor as the original FastSpeech2 model. The HiFiGan portion takes the generator from HiFiGan and uses it to synthesize audio from the output of the FastSpeech2 portion. No spectrograms are used in training. All losses are taken from HiFiGan, plus additional losses for the variance adaptor.
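The training objective described above combines HiFiGan's GAN losses with regression losses on the variance adaptor's duration, pitch, and energy predictions. A minimal NumPy sketch of the variance-adaptor side is shown below; the tensor names and the plain unweighted sum are illustrative assumptions, not NeMo's actual loss implementation:

```python
import numpy as np

def mse(pred, target):
    # Mean squared error between two equal-length arrays
    pred, target = np.asarray(pred, dtype=float), np.asarray(target, dtype=float)
    return float(np.mean((pred - target) ** 2))

def variance_adaptor_loss(pred_dur, tgt_dur, pred_pitch, tgt_pitch,
                          pred_energy, tgt_energy):
    # Illustrative: one MSE term per predicted variance, summed without weights.
    # The full training loss would add HiFiGan's adversarial and
    # feature-matching terms on top of this.
    return mse(pred_dur, tgt_dur) + mse(pred_pitch, tgt_pitch) + mse(pred_energy, tgt_energy)

# Example: each term contributes an MSE of 1.0
total = variance_adaptor_loss([0.5], [1.5], [0.5], [1.5], [0.5], [1.5])
```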
This model is trained on LJSpeech sampled at 22050 Hz, and has been tested on generating female English voices with an American accent. Supplementary data (durations, pitches, energies) were calculated using dataset preprocessing scripts that can be found in the NeMo library.
No performance information available at this time.
This model can be automatically loaded from NGC.
```python
import soundfile as sf
from nemo.collections.tts.models import FastSpeech2HifiGanE2EModel

# Load the model from NGC
model = FastSpeech2HifiGanE2EModel.from_pretrained(model_name="tts_en_e2e_fastspeech2hifigan")

# Run inference
tokens = model.parse("Hey, I can speak!")
audio = model.convert_text_to_waveform(tokens=tokens)

# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').numpy(), 22050)
```
This model accepts batches of text.
This model generates audio.
This model outputs audio at 22050 Hz.
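Because the output sample rate is fixed at 22050 Hz, audio destined for a pipeline that expects a different rate must be resampled. A minimal linear-interpolation sketch with NumPy is below; a production pipeline would typically use a proper resampler (e.g. librosa or torchaudio) instead:

```python
import numpy as np

def resample_linear(audio, orig_sr, target_sr):
    """Resample a 1-D waveform by linear interpolation (illustrative only)."""
    audio = np.asarray(audio, dtype=np.float32)
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, audio)

# Example: one second of 22050 Hz audio resampled to 16000 Hz
wave = np.zeros(22050, dtype=np.float32)
out = resample_linear(wave, 22050, 16000)
```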
1.0.0 (current): The original version released with NeMo 1.0.0.
FastSpeech 2/2s paper: https://arxiv.org/abs/2006.04558
LJSpeech preprocessing scripts: https://github.com/NVIDIA/NeMo/tree/v1.0.0/scripts/dataset_processing/ljspeech