
STT Eo Conformer-Transducer Large

A large Conformer-Transducer model for Esperanto Automatic Speech Recognition, fine-tuned from an English SSL model on the Mozilla Common Voice Esperanto 11.0 dataset.
Latest Version
April 4, 2023
455.65 MB

Model Overview

This collection contains a large-size version of the Conformer-Transducer model (around 120M parameters) that was obtained by fine-tuning an English SSL-pretrained model on the Mozilla Common Voice Esperanto 11.0 dataset. The model uses a Google SentencePiece [1] tokenizer with a vocabulary size of 128, and transcribes speech in the lower-case Esperanto alphabet along with spaces and apostrophes.
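The output alphabet described above can be illustrated with a small normalization sketch. The character set and function below are illustrative assumptions for clarity, not code shipped with the model:

```python
# Hypothetical normalization matching the model's output alphabet:
# lower-case Esperanto letters (including ĉ, ĝ, ĥ, ĵ, ŝ, ŭ), spaces, apostrophes.
ESPERANTO_CHARS = set("abcĉdefgĝhĥijĵklmnoprsŝtuŭvz '")

def normalize(text: str) -> str:
    """Lower-case the text and drop any character outside the model's alphabet."""
    lowered = text.lower()
    return "".join(ch for ch in lowered if ch in ESPERANTO_CHARS)

print(normalize("Ĉu vi parolas Esperanton?"))  # ĉu vi parolas esperanton
```

Note that punctuation and capitalization are not produced by the model, so reference transcripts are typically normalized the same way before scoring.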

Model Architecture

Conformer-Transducer is an autoregressive variant of the Conformer model [2] for Automatic Speech Recognition that uses Transducer loss/decoding. More details on this model are available here: Conformer-Transducer Model.


The NeMo toolkit [3] was used to fine-tune the model from the English SSL checkpoint for three hundred epochs. The model was fine-tuned with this example script and this base config. As the pretrained English SSL model we used ssl_en_conformer_large, which was trained on the LibriLight corpus (~56k hours of unlabeled English speech).

The tokenizer for this model was built using the text transcripts of the train set with this script.


All models in this collection are trained on the Mozilla Common Voice Esperanto 11.0 dataset, which comprises about 1400 validated hours of Esperanto speech. However, the training set contains a much smaller amount of data, because Mozilla developers removed repeated texts when forming train.tsv, dev.tsv, and test.tsv.

  • Train set: ~250 hours.
  • Dev set: ~25 hours.
  • Test set: ~25 hours.

Tokenizer Construction

The tokenizer for this model was built using text corpus provided with the train dataset.

We build a Google SentencePiece tokenizer [1] with the following script:

python [NEMO_GIT_FOLDER]/scripts/tokenizers/ \
  --manifest="train_manifest.json" \
  --vocab_size=128 \
  --tokenizer="spe" \
  --spe_type="bpe" \
  --spe_character_coverage=1.0 \
  --no_lower_case


Performance

The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER).

The model obtains the following scores on the following Mozilla Common Voice evaluation datasets:

| Version | Tokenizer         | Vocabulary Size | Dev WER | Test WER | Train Dataset      |
|---------|-------------------|-----------------|---------|----------|--------------------|
| 1.14.0  | SentencePiece BPE | 128             | 2.4     | 4.0      | MCV-11.0 Train set |
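WER is the word-level edit distance between hypothesis and reference, divided by the number of reference words. The minimal pure-Python sketch below illustrates the metric; it is not the evaluation code used to produce the numbers above:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, computed with a single rolling DP row.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,          # deletion
                       d[j - 1] + 1,      # insertion
                       prev + (r != h))   # substitution (or match)
            prev = cur
    return d[len(hyp)] / len(ref)

# One substituted word out of three reference words -> WER = 1/3.
print(wer("la kato sidas", "la kato staras"))
```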

How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for finetuning on another dataset.

Automatically load the model from NGC

import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained(model_name="stt_eo_conformer_transducer_large")

Transcribing audio with this model

python [NEMO_GIT_FOLDER]/examples/asr/ \
  pretrained_name="stt_eo_conformer_transducer_large"


Input

This model accepts 16000 Hz mono-channel audio (WAV files) as input.
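Before transcription it can be useful to verify that audio files match the expected format. The helper below, built on Python's standard-library wave module, is an illustrative sketch and not part of NeMo:

```python
import wave

def matches_model_input(path: str, expected_rate: int = 16000) -> bool:
    """Check that a WAV file is mono-channel and sampled at the expected rate."""
    with wave.open(path, "rb") as wf:
        return wf.getnchannels() == 1 and wf.getframerate() == expected_rate
```

Files that fail this check would need to be resampled and downmixed to mono before being passed to the model.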


Output

This model provides transcribed speech as a string for a given audio sample.


Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse on accented speech.


References

[1] Google Sentencepiece Tokenizer

[2] Conformer: Convolution-augmented Transformer for Speech Recognition

[3] NVIDIA NeMo Toolkit


License

License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.