UniGlow is a Glow-based (i.e., normalizing-flow-based) model that generates audio from mel spectrograms.
UniGlow improves upon WaveGlow by reducing the parameter count by roughly 12x: like WaveGlow, it has 12 glow layers, but all 12 layers share a single set of parameters.
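The effect of this weight sharing on parameter count can be sketched in plain Python. This is an illustration only, not NeMo's implementation, and the layer size below is a made-up placeholder:

```python
# Illustrative sketch of weight sharing across flow steps.
# FlowStep stands in for one Glow layer; n_params is a made-up size,
# not the real UniGlow/WaveGlow layer size.

class FlowStep:
    def __init__(self, n_params=1000):
        self.n_params = n_params

# WaveGlow-style: 12 independent steps, each with its own parameters
waveglow_steps = [FlowStep() for _ in range(12)]
waveglow_params = sum(step.n_params for step in waveglow_steps)

# UniGlow-style: one step reused 12 times, so its parameters count once
shared_step = FlowStep()
uniglow_steps = [shared_step] * 12
unique_steps = {id(step) for step in uniglow_steps}
uniglow_params = len(unique_steps) * shared_step.n_params

print(waveglow_params // uniglow_params)  # 12
```

The model still applies 12 flow transformations at inference time; only the stored weights are deduplicated, which is where the 12x reduction comes from.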
This model was trained on LJSpeech sampled at 22050 Hz, and has been tested on generating a female English voice with an American accent.
No performance information available at this time.
This model can be automatically loaded from NGC.
NOTE: In order to generate audio, you also need a spectrogram generator from NeMo. This example uses the FastPitch model.
```python
# Load the spectrogram generator
from nemo.collections.tts.models import FastPitchModel
spec_generator = FastPitchModel.from_pretrained("tts_en_fastpitch")

# Load the UniGlow vocoder
from nemo.collections.tts.models import UniGlowModel
vocoder = UniGlowModel.from_pretrained(model_name="tts_uniglow")

# Generate audio
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)

# Save the audio to disk in a file called speech.wav
# (detach from the graph and take the first item in the batch)
sf.write("speech.wav", audio.to('cpu').detach().numpy()[0], 22050)
```
This model accepts batches of mel spectrograms.
This model outputs audio at 22050 Hz.
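The input/output shapes can be sketched as follows. This assumes 80 mel bands and a hop length of 256, which are typical for NeMo's 22050 Hz LJSpeech configurations but should be checked against your model's config:

```python
import numpy as np

# Hypothetical shapes; n_mels=80 and hop=256 are assumptions, not
# values confirmed by this card.
batch, n_mels, frames, hop = 4, 80, 200, 256

# A batch of mel spectrograms: [batch, n_mels, frames]
spectrograms = np.zeros((batch, n_mels, frames), dtype=np.float32)

# The vocoder upsamples each spectrogram frame by the hop length,
# giving audio of shape [batch, frames * hop] at 22050 Hz.
expected_audio_shape = (batch, frames * hop)
print(expected_audio_shape)  # (4, 51200)
```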
There are no known limitations at this time.
1.0.0 (current): The original version that was released with NeMo 1.0.0