MelGAN is a generative adversarial network (GAN) model that generates audio from mel spectrograms. This checkpoint implements the full-band MelGAN as described in the Multi-band MelGAN paper.
The MelGAN generator uses transposed convolutions to upsample the mel spectrogram to audio.
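The upsampling idea can be sketched with a single transposed-convolution stage. The channel counts, kernel size, and stride below are illustrative only, not the actual MelGAN configuration:

```python
import torch
import torch.nn as nn

# One hypothetical MelGAN-style upsampling stage: a transposed convolution
# stretches the spectrogram's time axis by a factor equal to its stride.
# With kernel_size = 2 * stride and padding = stride // 2, the output length
# is exactly stride times the input length.
upsample = nn.ConvTranspose1d(
    in_channels=80,   # number of mel bands
    out_channels=40,
    kernel_size=16,
    stride=8,         # 8x temporal upsampling
    padding=4,
)

mel = torch.randn(1, 80, 100)  # (batch, mel_bins, frames)
out = upsample(mel)
print(out.shape)               # time axis grows 8x: torch.Size([1, 40, 800])
```

The full generator stacks several such stages (interleaved with residual blocks) until the frame rate of the spectrogram reaches the audio sample rate.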
This model is trained on LJSpeech sampled at 22050 Hz, and has been tested on generating female English voices with an American accent. All NeMo models are trained in accordance with the model's YAML configuration file. In particular, this model was trained for 3000 epochs on 8 16 GB V100 GPUs with a batch size of 64.
No performance information available at this time.
This model can be automatically loaded from NGC. NOTE: In order to generate audio, you also need a spectrogram generator from NeMo. This example uses the FastPitch model.
```python
# Load spectrogram generator
from nemo.collections.tts.models import FastPitchModel
spec_generator = FastPitchModel.from_pretrained("tts_en_fastpitch")

# Load MelGAN
from nemo.collections.tts.models import MelGanModel
model = MelGanModel.from_pretrained(model_name="tts_melgan")

# Generate audio
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)

# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').numpy(), 22050)
```
This model accepts batches of mel spectrograms.
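Because the model consumes batches, variable-length spectrograms must first be collated into a single tensor. A minimal sketch of such a collation step is shown below; `pad_spectrograms` is an illustrative helper, not part of the NeMo API:

```python
import torch

# Hypothetical helper: zero-pad a list of (mel_bins, frames) spectrograms
# to the longest frame count so they form one (batch, mel_bins, frames) tensor.
def pad_spectrograms(specs):
    max_frames = max(s.shape[1] for s in specs)
    batch = torch.zeros(len(specs), specs[0].shape[0], max_frames)
    for i, s in enumerate(specs):
        batch[i, :, : s.shape[1]] = s
    return batch

specs = [torch.randn(80, 120), torch.randn(80, 95)]
batch = pad_spectrograms(specs)
print(batch.shape)  # torch.Size([2, 80, 120])
```

The padded tensor can then be passed to the vocoder in one call; trailing silence from the zero padding can be trimmed from the generated audio afterwards.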
This model outputs audio at 22050Hz.
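Since the output rate is fixed at 22050 Hz, the duration of generated audio follows directly from the sample count (the sample count below is made up for illustration):

```python
SAMPLE_RATE = 22050  # Hz, the model's fixed output rate

num_samples = 66150                     # hypothetical length of a generated clip
duration_s = num_samples / SAMPLE_RATE  # 66150 / 22050 = 3.0 seconds
print(duration_s)                       # 3.0
```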
There are no known limitations at this time.
1.0.0 (current): An updated version of MelGAN that standardizes mel spectrogram generation across NeMo models.
1.0.0rc1: The original version that was released with NeMo 1.0.0rc1.
MelGAN paper: https://arxiv.org/abs/1910.06711
Multi-band MelGAN paper: https://arxiv.org/abs/2005.05106