STT Fa FastConformer Hybrid Transducer-CTC Large

This collection contains the large version (114M) of the Persian speech recognition model with a FastConformer encoder and a Hybrid decoder (joint RNNT-CTC loss). The model has a vocab size of 1024.
Latest Version: November 7, 2023 (405.37 MB)

Model Overview

This collection contains the Persian FastConformer Hybrid (Transducer and CTC) Large model (around 114M parameters) trained on Mozilla CommonVoice Persian with around 335 hours of Persian speech.

It utilizes a Google SentencePiece [1] tokenizer with a vocabulary size of 1024, and transcribes speech into the Persian alphabet without punctuation.

Model Architecture

FastConformer is an optimized version of the Conformer model [2] with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: Fast-Conformer Model and about Hybrid Transducer-CTC training here: Hybrid Transducer-CTC.
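The main practical effect of the 8x downsampling is a much shorter encoder sequence. As a rough sanity check (assuming the standard 10 ms feature hop, which is an assumption and not stated in this card):

```python
def encoder_frames(num_samples: int, sample_rate: int = 16000,
                   hop_ms: int = 10, downsampling: int = 8) -> int:
    """Approximate number of encoder output frames for a raw waveform.

    Feature frames are extracted at a `hop_ms` hop, then reduced by the
    FastConformer's 8x depthwise-separable convolutional downsampling.
    """
    hop_samples = sample_rate * hop_ms // 1000
    feature_frames = num_samples // hop_samples
    return feature_frames // downsampling

# 10 s of 16 kHz audio -> 1000 feature frames -> 125 encoder frames (one per 80 ms)
print(encoder_frames(160000))  # 125
```

So attention in the encoder operates over roughly one frame per 80 ms of audio, which is what makes the model fast relative to the original Conformer's 4x downsampling.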


The NeMo toolkit [3] was used to train the models for several hundred epochs. These models were trained with this example script and this base config.

The tokenizers for these models were built using the text transcripts of the train set with this script.

This model was initialized with the weights of the English FastConformer Hybrid (Transducer and CTC) Large P&C model and fine-tuned on Persian data.


Training Datasets

All the models in this collection are trained on the Mozilla CommonVoice Persian Corpus 15.0.

In order to leverage the entire validated data portion, the standard train/dev/test splits were discarded and replaced with custom splits. The custom splits may be reproduced by:

  • grouping utterances with identical transcripts and sorting them in ascending order by (transcript occupancy, transcript) pairs;
  • selecting the first 10540 utterances for the test set (to maintain the original size);
  • selecting the second 10540 utterances for the dev set;
  • selecting the remaining data for the training set.
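Assuming the validated portion is available as (utterance_id, transcript) pairs, the split procedure above can be sketched as follows (the function and argument names here are illustrative, not from the released scripts):

```python
from collections import Counter

def make_splits(utterances, test_size=10540, dev_size=10540):
    """Reproduce the custom CommonVoice splits described above.

    `utterances` is a list of (utterance_id, transcript) pairs; returns
    (test, dev, train) lists of the same pairs.
    """
    # "Transcript occupancy": how many utterances share each transcript.
    occupancy = Counter(t for _, t in utterances)
    # Sorting by (occupancy, transcript) groups identical transcripts
    # together and orders the groups ascendingly by size.
    ordered = sorted(utterances, key=lambda u: (occupancy[u[1]], u[1]))
    test = ordered[:test_size]
    dev = ordered[test_size:test_size + dev_size]
    train = ordered[test_size + dev_size:]
    return test, dev, train
```

Because rarely-repeated transcripts sort first, the test and dev sets are drawn from the least-duplicated transcripts, which limits transcript overlap with the training set.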

The transcripts were additionally normalized according to the following script (empty results were discarded):

import unicodedata

SKIP = set(
    [
        "=",  # occurs only 2x in utterance (transl.): "twenty = xx"
        "ā",  # occurs only 4x together with "š"
        # Arabic letters
        "ة",  # TEH MARBUTA
        # (remaining entries elided in this card)
    ]
)

# Tokens replaced with empty strings; the full list is elided in this card.
DISCARD = [
    # "(laughter)" in Farsi
    # ASCII
    # Unicode punctuation?
    # Unicode whitespace?
    # Other
]

REPLACEMENTS = {
    "أ": "ا",
    "ۀ": "ە",
    "ك": "ک",
    "ي": "ی",
    "ى": "ی",
    "ﯽ": "ی",
    "ﻮ": "و",
    "ے": "ی",
    "ﺒ": "ب",
    "ﻢ": "ﻡ",
    "٬": " ",
    "ە": "ه",
}

def maybe_normalize(text: str) -> str | None:

    # Skip utterances with banned characters
    if set(text) & SKIP:
        return None  # skip this

    # Remove hashtags - they are not being read in Farsi CV
    text = " ".join(w for w in text.split() if not w.startswith("#"))

    # Replace selected characters with others
    for lhs, rhs in REPLACEMENTS.items():
        text = text.replace(lhs, rhs)

    # Replace selected tokens with empty strings
    for tok in DISCARD:
        text = text.replace(tok, "")

    # Unify the symbols that have the same meaning but different Unicode representations.
    text = unicodedata.normalize("NFKC", text)

    # Remove hamzas that were not merged with any letter by NFKC.
    text = text.replace("ء", "")

    # Remove double whitespace etc.
    return " ".join(t for t in text.split() if t)
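As a standalone illustration of the Unicode steps above (NFKC folding, hamza removal, and whitespace collapsing), independent of the elided character lists:

```python
import unicodedata

def normalize_core(text: str) -> str:
    # Fold compatibility characters (e.g. Arabic presentation forms) to
    # their canonical equivalents.
    text = unicodedata.normalize("NFKC", text)
    # Remove free-standing hamza not merged with a letter by NFKC.
    text = text.replace("ء", "")
    # Collapse runs of whitespace.
    return " ".join(text.split())

# The Arabic presentation form U+FEE2 (final-form MEEM) folds to the
# ordinary letter MEEM, U+0645.
print(normalize_core("\uFEE2") == "\u0645")  # True
```

This is why the card describes NFKC as unifying "symbols that have the same meaning but different Unicode representation": visually identical glyphs collapse to one codepoint, shrinking the effective character inventory before tokenization.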

Tokenizer Construction

The tokenizer for this model was built using the text corpus provided with the train dataset.

We build a Google SentencePiece tokenizer [1] with the following script:

python [NEMO_GIT_FOLDER]/scripts/tokenizers/ \
  --manifest="train_manifest.json" \
  --vocab_size=1024 \
  --tokenizer="spe" \
  --spe_type="bpe" \
  --spe_character_coverage=1.0 \
  --spe_max_sentencepiece_length=4

Performance

The performance of Automatic Speech Recognition models is measured using Character Error Rate (CER) and Word Error Rate (WER).

The model obtains the following scores on our custom dev and test splits of Mozilla CommonVoice Persian 15.0:

| Model     | %WER/CER dev | %WER/CER test |
|-----------|--------------|---------------|
| RNNT head | 15.44 / 3.89 | 15.48 / 4.63  |
| CTC head  | 13.18 / 3.38 | 13.16 / 3.85  |
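WER is the word-level Levenshtein (edit) distance between hypothesis and reference, divided by the number of reference words; CER is the same computation over characters. A minimal sketch of the metric (not the evaluation code used for the table above):

```python
def edit_distance(ref, hyp) -> int:
    """Levenshtein distance between two sequences (one-row DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(ref: str, hyp: str) -> float:
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

def cer(ref: str, hyp: str) -> float:
    return edit_distance(list(ref), list(hyp)) / len(ref)
```

Note that CER is typically much lower than WER (as in the table above), since a single wrong character makes the whole word count as an error for WER.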

How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Automatically load the model from NGC

import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name="stt_fa_fastconformer_hybrid_large")

Transcribing text with this model

Using Transducer mode inference:

python [NEMO_GIT_FOLDER]/examples/asr/ \
  pretrained_name="stt_fa_fastconformer_hybrid_large"

Using CTC mode inference:

python [NEMO_GIT_FOLDER]/examples/asr/ \
  pretrained_name="stt_fa_fastconformer_hybrid_large"


Input

This model accepts 16 kHz mono-channel audio (WAV files) as input.


Output

This model provides transcribed speech as a string for a given audio sample.


Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech.


References

[1] Google SentencePiece Tokenizer

[2] Conformer: Convolution-augmented Transformer for Speech Recognition

[3] NVIDIA NeMo Toolkit


License

License to use this model is covered by the NGC TERMS OF USE unless another License/Terms of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.