Mixer-TTS is a non-autoregressive model for mel-spectrogram generation. The model is based on the MLP-Mixer architecture adapted for speech synthesis. It contains pitch and duration predictors, with the latter being trained with an unsupervised TTS alignment framework.
For more information about the model architecture, see the Mixer-TTS paper.
This model is trained on LJSpeech sampled at 22050Hz, and has been tested on generating female English voices with an American accent.
No performance information available at this time.
This model can be automatically loaded from NGC.
NOTE: In order to generate audio, you also need a 22050Hz vocoder from NeMo. This example uses the HiFi-GAN model which was additionally fine-tuned on Mixer-TTS outputs.
```python
# Load Mixer-TTS
from nemo.collections.tts.models import MixerTTSModel
spec_generator = MixerTTSModel.from_pretrained("tts_en_lj_mixertts")

# Load vocoder (the HiFi-GAN checkpoint fine-tuned on Mixer-TTS outputs)
from nemo.collections.tts.models import HifiGanModel
model = HifiGanModel.from_pretrained(model_name="tts_en_lj_hifigan_ft_mixertts")

# Generate audio
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)

# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').numpy(), 22050)
```
This model accepts batches of text.
This model generates mel spectrograms.
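As a rough sanity check on output sizes: assuming the common LJSpeech configuration of a 256-sample hop at 22050 Hz (an assumption; verify against the checkpoint's actual config), each mel frame covers about 11.6 ms, so audio duration and frame count can be estimated from one another:

```python
# Relate mel frame count to audio duration.
# Assumes hop_length=256 and sample_rate=22050 (typical LJSpeech
# settings; check the checkpoint config before relying on these).
SAMPLE_RATE = 22050
HOP_LENGTH = 256

def frames_to_seconds(n_frames: int) -> float:
    """Approximate audio duration for a given number of mel frames."""
    return n_frames * HOP_LENGTH / SAMPLE_RATE

def seconds_to_frames(seconds: float) -> int:
    """Approximate mel frame count for a given audio duration."""
    return round(seconds * SAMPLE_RATE / HOP_LENGTH)

print(seconds_to_frames(1.0))  # roughly 86 frames per second of audio
```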
This checkpoint only works well with vocoders that were trained on 22050Hz data. Otherwise, the generated audio may be scratchy or choppy-sounding.
1.6.0: The original version of the Mixer-TTS model, released with NeMo 1.6.0.
Mixer-TTS paper: https://arxiv.org/pdf/2110.03584.pdf