This collection contains large-size versions of Conformer-CTC (around 120M parameters) trained on the Mozilla Common Voice 10.0 Belarusian dataset, which contains around 500 hours of Belarusian speech. The model transcribes speech in the lowercase Belarusian alphabet; note that the Belarusian Cyrillic letter 'і' differs from the English Latin 'i'.
Trained or fine-tuned NeMo models (with the file extension .nemo) can be converted to Riva models (with the file extension .riva) and then deployed. Here is a pre-trained Conformer-CTC speech-to-text (STT) Riva model.
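If you are converting your own checkpoint, the conversion is typically done with NVIDIA's nemo2riva tool. The commands below are a hedged sketch: installation may additionally require NVIDIA's package index, and the file names are placeholders.

pip install nemo2riva
nemo2riva --out stt_be_conformer_ctc_large.riva stt_be_conformer_ctc_large.nemo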
Conformer-CTC is a non-autoregressive variant of the Conformer model for Automatic Speech Recognition that uses CTC loss/decoding instead of the Transducer. You can find more details on this model here: Conformer-CTC Model.
The tokenizers for these models were built using the text transcripts of the train set with this script.
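For reference, such a BPE tokenizer is typically built with NeMo's process_asr_text_tokenizer.py script. The command below is a sketch only: the manifest path, output directory, and vocabulary size are assumed placeholders, not the exact values used for this model.

python [NEMO_GIT_FOLDER]/scripts/tokenizers/process_asr_text_tokenizer.py \
 --manifest="train_manifest.json" \
 --data_root="tokenizers/" \
 --vocab_size=1024 \
 --tokenizer="spe" \
 --spe_type="bpe"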
The pre-trained "STT En Conformer-CTC Large" model was used as a starting point and then fine-tuned on Belarusian.
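A minimal sketch of how such cross-language fine-tuning is usually set up in NeMo: load the English checkpoint, then swap in the Belarusian tokenizer with change_vocabulary. The tokenizer directory used here is an assumed placeholder.

import nemo.collections.asr as nemo_asr

# Load the English checkpoint used as the starting point
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name="stt_en_conformer_ctc_large")

# Replace the English tokenizer with the Belarusian one built from the train transcripts
# ("tokenizers/tokenizer_spe_bpe_v1024" is an assumed placeholder path)
asr_model.change_vocabulary(new_tokenizer_dir="tokenizers/tokenizer_spe_bpe_v1024", new_tokenizer_type="bpe")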
The model was trained on the MCV-10-Be dataset with 465 hours in the train set, and evaluated on 26 hours in the test set and 25 hours in the dev set.
Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding. The WER on the dev set is 4.8%.
The model is available for use in the NeMo toolkit, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name="stt_be_conformer_ctc_large")
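Once loaded, the model can transcribe audio directly from Python; a minimal sketch, assuming sample.wav is a local 16 kHz mono WAV file:

# transcribe() takes a list of audio file paths and returns a list of transcripts
transcripts = asr_model.transcribe(["sample.wav"])
print(transcripts[0])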
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="stt_be_conformer_ctc_large" \
 audio_dir=""
This model accepts 16 kHz mono-channel audio (wav files) as input.
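Recordings in other formats or sample rates need to be converted first; a minimal sketch using ffmpeg, with input.mp3 and output.wav as placeholder file names:

# Downmix to mono and resample to 16 kHz, as the model expects
ffmpeg -i input.mp3 -ac 1 -ar 16000 output.wav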
This model provides transcribed speech as a string for a given audio sample.
Since the model was trained on the MCV-10 dataset only, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse on accented speech.