NGC | Catalog

WaveGlow LJSpeech


Description

Model checkpoints for the WaveGlow model trained with NeMo.

Publisher

NVIDIA

Latest Version

2

Modified

April 4, 2023

Size

1023.65 MB

Overview

This is a checkpoint for the WaveGlow model trained in NeMo on LJSpeech for 1,200 epochs. Training used Apex/Amp optimization level O1 on 8 × 32 GB V100 GPUs, with a batch size of 12 per GPU for a total batch size of 96.
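The total batch size quoted above is simply the per-GPU batch size multiplied by the number of GPUs; a quick check:

```python
# Effective (global) batch size = per-GPU batch size x number of GPUs.
num_gpus = 8
per_gpu_batch_size = 12
global_batch_size = num_gpus * per_gpu_batch_size
print(global_batch_size)  # 96
```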

It contains the checkpoint for the WaveGlow neural module and the YAML config file:

  • WaveGlowNM.pt

Documentation

Refer to the documentation at https://github.com/NVIDIA/NeMo

Usage example: Download both the NeMo Tacotron 2 and NeMo WaveGlow checkpoints, place them in a checkpoint_dir, and run tts_infer.py (from NeMo's TTS examples).

python tts_infer.py --spec_model tacotron2 --spec_model_config=$checkpoint_dir/tacotron2.yaml --spec_model_load_dir=$checkpoint_dir --vocoder waveglow --vocoder_model_config=$checkpoint_dir/waveglow.yaml --vocoder_model_load_dir=$checkpoint_dir --eval_dataset=.json
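The --eval_dataset argument expects a dataset description file. NeMo scripts conventionally consume a JSON-lines manifest, one utterance per line; the key names below (audio_filepath, duration, text) follow that common NeMo convention but should be verified against the tts_infer.py version you are running. A minimal sketch that writes such a manifest:

```python
import json

# Hypothetical utterance entries; file path, duration, and transcript are
# placeholders following the usual NeMo manifest convention.
entries = [
    {
        "audio_filepath": "wavs/LJ001-0001.wav",
        "duration": 9.65,
        "text": "Printing, in the only sense with which we are at present concerned.",
    },
]

# Write one JSON object per line (JSON-lines manifest).
with open("eval_manifest.json", "w") as f:
    for entry in entries:
        f.write(json.dumps(entry) + "\n")
```

The resulting file would then be passed as --eval_dataset=eval_manifest.json.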