This collection contains a large-size version of the Conformer-CTC model (around 120M parameters) that was obtained by fine-tuning an English SSL-pretrained model on the Mozilla Common Voice Esperanto 11.0 dataset. The Esperanto model uses a Google SentencePiece tokenizer with a vocabulary size of 128, and transcribes speech in the lowercase Esperanto alphabet along with spaces and apostrophes.
Conformer-CTC is a non-autoregressive variant of the Conformer model for Automatic Speech Recognition that uses CTC loss/decoding instead of a Transducer. You may find more details about this model here: Conformer-CTC Model.
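As a rough illustration of what CTC decoding does (this is not the NeMo implementation, which handles decoding internally), a greedy CTC decoder takes the most likely token per frame, collapses consecutive repeats, and drops the blank symbol:

# Minimal sketch of greedy CTC decoding: collapse repeated frame predictions and remove blanks.
def ctc_greedy_decode(frame_token_ids, blank_id):
    decoded = []
    prev = None
    for token_id in frame_token_ids:
        if token_id != prev and token_id != blank_id:
            decoded.append(token_id)
        prev = token_id
    return decoded

# Frame predictions [1, 1, blank, 2, 2, 2, blank, 1] collapse to the token sequence [1, 2, 1].
print(ctc_greedy_decode([1, 1, 0, 2, 2, 2, 0, 1], blank_id=0))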
The NeMo toolkit was used to fine-tune the model from the English SSL model for 300 epochs. The model was fine-tuned with this example script and this base config. As the pretrained English SSL model, we used ssl_en_conformer_large, which was trained on the LibriLight corpus (~56k hours of unlabeled English speech).
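Conceptually, the fine-tuning starts from the self-supervised Conformer encoder while the CTC decoder head is trained from scratch. The sketch below only illustrates that weight-initialization idea in Python; the model names below are stand-ins, and the linked example script and base config are the authoritative recipe:

import nemo.collections.asr as nemo_asr

# Illustrative sketch only: load the SSL-pretrained English encoder and copy its weights
# into a Conformer-CTC model. In the real recipe the CTC model is built from the base
# config with the Esperanto tokenizer; here a pretrained English CTC checkpoint is used
# purely as a convenient stand-in.
ssl_model = nemo_asr.models.SpeechEncDecSelfSupervisedModel.from_pretrained(
    model_name="ssl_en_conformer_large"
)
ctc_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
    model_name="stt_en_conformer_ctc_large"
)
# Initialize the Conformer encoder from the self-supervised checkpoint; strict=False
# ignores any layers that do not line up between the two checkpoints.
ctc_model.encoder.load_state_dict(ssl_model.encoder.state_dict(), strict=False)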
The tokenizer for this model was built using the text transcripts of the train set with this script.
All training details can be found at the Esperanto ASR example.
All the models in this collection are trained on the Mozilla Common Voice Esperanto 11.0 dataset, comprising about 1,400 validated hours of Esperanto speech. However, the training set consists of a much smaller amount of data, because when forming train.tsv, dev.tsv, and test.tsv, the Mozilla developers removed repeated texts from the train split.
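For reference, NeMo training and evaluation scripts do not consume the Common Voice TSV files directly; they read JSON-lines manifests in which each line points to an audio file and its transcript. A minimal sketch of that format (file names and transcripts are placeholders):

import json

# Each manifest line is a JSON object with "audio_filepath", "duration" (in seconds), and "text".
entries = [
    {"audio_filepath": "clips/sample_0001.wav", "duration": 3.2, "text": "saluton mondo"},
    {"audio_filepath": "clips/sample_0002.wav", "duration": 2.7, "text": "bonan tagon"},
]

with open("train_manifest.json", "w", encoding="utf-8") as fout:
    for entry in entries:
        fout.write(json.dumps(entry, ensure_ascii=False) + "\n")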
The tokenizer for this model was built using the text corpus provided with the train dataset.
We build a Google SentencePiece tokenizer with the following script:
python [NEMO_GIT_FOLDER]/scripts/tokenizers/process_asr_text_tokenizer.py \
    --manifest="train_manifest.json" \
    --data_root="<OUTPUT DIRECTORY FOR TOKENIZER>" \
    --vocab_size=128 \
    --tokenizer="spe" \
    --spe_type="bpe" \
    --spe_character_coverage=1.0 \
    --no_lower_case \
    --log
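As a quick sanity check, the resulting SentencePiece model can be loaded with the sentencepiece Python package. The path below assumes the script writes the tokenizer into a tokenizer_spe_bpe_v128 subdirectory of --data_root; adjust it to your actual output location:

import sentencepiece as spm

# Assumed output layout of the tokenizer-building script; verify against your --data_root.
sp = spm.SentencePieceProcessor(
    model_file="<OUTPUT DIRECTORY FOR TOKENIZER>/tokenizer_spe_bpe_v128/tokenizer.model"
)

print(sp.get_piece_size())                        # expected: 128
print(sp.encode("saluton mondo", out_type=str))   # subword pieces for a sample phrase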
The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER).
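WER is the number of substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the number of reference words. NeMo ships a helper for this; a small illustrative check (with made-up Esperanto sentences) might look like:

from nemo.collections.asr.metrics.wer import word_error_rate

# One substituted word out of five reference words -> WER = 0.2
references = ["mi parolas esperanton tre bone"]
hypotheses = ["mi parolas esperanton tre bona"]
print(word_error_rate(hypotheses=hypotheses, references=references))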
The model obtains the following scores on the Mozilla Common Voice evaluation datasets:
| Version | Tokenizer | Vocabulary Size | Dev WER | Test WER | Train Dataset |
|---------|-----------|-----------------|---------|----------|---------------|
| 1.14.0  | SentencePiece BPE | 128 | 2.9 | 4.8 | MCV-11.0 Train set |
The model is available for use in the NeMo toolkit, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name="stt_eo_conformer_ctc_large")
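Once the checkpoint is loaded, transcription from Python is a single call, continuing from the snippet above (the wav path is a placeholder):

# Transcribe one or more 16 kHz mono wav files; the result is a list of strings.
transcriptions = asr_model.transcribe(["<path to a 16 kHz mono wav file>"])
print(transcriptions)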
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
    pretrained_name="stt_eo_conformer_ctc_large" \
    audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
This model accepts 16000 Hz mono-channel audio (wav files) as input.
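If your recordings are not already 16 kHz mono wav files, one common way to convert them (assuming the third-party librosa and soundfile packages; file names are placeholders) is:

import librosa
import soundfile as sf

# Load the audio resampled to 16 kHz mono, then rewrite it as a wav file the model can consume.
audio, sr = librosa.load("input_audio.mp3", sr=16000, mono=True)
sf.write("output_16k_mono.wav", audio, sr)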
This model provides transcribed speech as a string for a given audio sample.
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech that includes technical terms or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
[1] Google SentencePiece Tokenizer
[2] Conformer: Convolution-augmented Transformer for Speech Recognition