SSL En Conformer XLarge

Self-Supervised Learning (SSL) checkpoints for the Conformer XLarge model. These are similar to the w2v-Conformer model and can be fine-tuned for Automatic Speech Recognition (ASR).
Latest version: April 4, 2023 (2.37 GB)

Model Overview

This collection contains Self-Supervised Learning (SSL) checkpoints for xlarge-size versions of the Conformer model (around 0.6B parameters). The models are trained on unlabeled English audio with a contrastive loss. They are similar to w2v-Conformer XL [3, 4] and can be fine-tuned for Automatic Speech Recognition (ASR).

Model Architecture

For details about the Conformer architecture, refer to [1].


The NeMo toolkit [2] was used for training the models. These models were trained with this example script and this base config.


All the models in this collection are trained on the LibriLight corpus (~56k hours of unlabeled English speech).

How to Use this Model

The pre-trained checkpoints are available in the NeMo toolkit [2] and have to be fine-tuned on a labeled dataset for ASR.

To load the checkpoint from NGC:

import nemo.collections.asr as nemo_asr
ssl_model = nemo_asr.models.ssl_models.SpeechEncDecSelfSupervisedModel.from_pretrained(model_name='ssl_en_conformer_xlarge')

To continue SSL training on your own dataset, set init_from_pretrained_model and optim appropriately in the config and use the same training script.
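As an illustration, the overrides for continuing SSL pre-training might look like the fragment below. Only init_from_pretrained_model comes from this card; the optimizer fields follow common NeMo config conventions and are illustrative values, so check them against the base config before use.

```yaml
# Hypothetical config-override sketch for continuing SSL pre-training.
# Verify field names and values against the base config shipped with NeMo.
init_from_pretrained_model: ssl_en_conformer_xlarge

model:
  optim:
    name: adamw    # optimizer choice is an assumption, not from this card
    lr: 1.0e-4     # illustrative learning rate, not a recommendation
```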


To fine-tune on a labeled dataset, refer to this example script for the transducer loss and to this example script for the CTC loss.

Briefly, you can load the pre-trained checkpoint into the fine-tuning model as shown below:

# define the fine-tuning model (cfg and trainer are assumed to be set up already)
asr_model = nemo_asr.models.EncDecRNNTBPEModel(cfg=cfg.model, trainer=trainer)

# load the SSL checkpoint weights; strict=False skips parameters that
# exist in only one of the two models (e.g. the ASR decoder)
asr_model.load_state_dict(ssl_model.state_dict(), strict=False)

# free the SSL model once its weights have been copied
del ssl_model


The available models in this collection are listed in the following table. The performance of the ASR models fine-tuned from these checkpoints is reported as Word Error Rate (WER, %) with greedy decoding on the LibriSpeech (LS) dev and test sets.

| Version | SSL Loss | Fine-tune Dataset | Fine-tune Model | Vocabulary Size | LS dev-clean | LS dev-other | LS test-clean | LS test-other |
|---------|----------|-------------------|-----------------|-----------------|--------------|--------------|---------------|---------------|
| 1.10.0 | Contrastive | LS 100h | Conformer-Transducer | 128 | 2.53 | 4.23 | 2.51 | 4.35 |
| 1.10.0 | Contrastive | LS 960h | Conformer-Transducer | 128 | 1.56 | 3.18 | 1.67 | 3.21 |
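For reference, WER is the word-level edit distance between the hypothesis and the reference transcript, divided by the number of reference words. A minimal self-contained sketch is shown below; this is not part of NeMo, which ships its own WER implementation.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(round(100 * word_error_rate("the cat sat on the mat",
                                  "the cat sat on a mat"), 2))  # 16.67
```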


Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse on accented speech.


References

[1] Conformer: Convolution-augmented Transformer for Speech Recognition

[2] NVIDIA NeMo Toolkit

[3] Pushing the Limits of SSL for ASR

[4] W2V-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-training


License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.