FastPitchHifiGanE2E is an end-to-end, non-autoregressive model that generates audio directly from text. It combines FastPitch and HiFiGan into a single model that is trained jointly in an end-to-end manner.
The FastPitch portion consists of the same transformer-based encoder, pitch predictor, and duration predictor as the original FastPitch model. The HiFiGan portion uses the HiFiGan generator to produce audio directly from the output of the FastPitch portion; no spectrograms are used in training the model. All losses are taken from HiFiGan, plus additional losses for the pitch and duration predictors.
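The combined training objective described above can be sketched as the HiFiGan losses plus weighted pitch and duration prediction terms. The helper below is a minimal illustration of that combination; the function names, the use of mean squared error, and the weight values are assumptions for illustration, not NeMo's actual implementation.

```python
def mse(pred, target):
    # Mean squared error between two equal-length sequences.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def e2e_loss(hifigan_loss, pitch_pred, pitch_tgt, dur_pred, dur_tgt,
             pitch_weight=0.1, dur_weight=0.1):
    # hifigan_loss: scalar total of the GAN losses from the HiFiGan portion
    # (placeholder input). The weights are illustrative, not NeMo's values.
    return (hifigan_loss
            + pitch_weight * mse(pitch_pred, pitch_tgt)
            + dur_weight * mse(dur_pred, dur_tgt))
```

The pitch and duration terms supervise the FastPitch predictors, while the HiFiGan term drives waveform quality.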
This model is trained on LJSpeech sampled at 22050 Hz, and has been tested on generating a female English voice with an American accent.
No performance information available at this time.
This model can be automatically loaded from NGC.
import soundfile as sf
from nemo.collections.tts.models import FastPitchHifiGanE2EModel

# Load the model from NGC
model = FastPitchHifiGanE2EModel.from_pretrained(model_name="tts_en_e2e_fastpitchhifigan")

# Run inference
tokens = model.parse("Hey, I can speak!")
audio = model.convert_text_to_waveform(tokens=tokens)

# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').numpy(), 22050)
This model accepts batches of text.
This model generates audio.
This model outputs audio at 22050 Hz.
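Because the output sample rate is fixed at 22050 Hz, the duration of a generated clip follows directly from its sample count. A small helper (illustrative, not part of the NeMo API):

```python
SAMPLE_RATE = 22050  # fixed output rate of this model

def clip_duration_seconds(num_samples: int) -> float:
    # Duration in seconds of a waveform with num_samples samples at 22050 Hz.
    return num_samples / SAMPLE_RATE
```

For example, a generated waveform of 44100 samples corresponds to a 2-second clip.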
1.0.0 (current): The original version released with NeMo 1.0.0.