This collection contains the Persian FastConformer Hybrid (Transducer and CTC) Large model (around 114M parameters), trained on Mozilla CommonVoice Persian with around 335 hours of Persian speech.
It uses a Google SentencePiece [1] tokenizer with a vocabulary size of 1024, and transcribes speech into the Persian alphabet without punctuation.
FastConformer is an optimized version of the Conformer model [2] with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with a joint Transducer and CTC decoder loss. More details on FastConformer are available here: Fast-Conformer Model, and on Hybrid Transducer-CTC training here: Hybrid Transducer-CTC.
The NeMo toolkit [3] was used to train the models for several hundred epochs. These models were trained with this example script and this base config.
The tokenizers for these models were built using the text transcripts of the train set with this script.
This model was initialized with the weights of English FastConformer Hybrid (Transducer and CTC) Large P&C model and fine-tuned to Persian data.
All the models in this collection are trained on Mozilla CommonVoice Persian Corpus 15.0.
In order to leverage the entire validated data portion, the standard train/dev/test splits were discarded and replaced with custom splits.
The transcripts were additionally normalized with the following script (utterances that normalized to an empty string were discarded):
```python
import string
import unicodedata

SKIP = set(
    list(string.ascii_letters)
    + [
        "=",  # occurs only 2x, in an utterance (transl.): "twenty = xx"
        "ā",  # occurs only 4x, together with "š"
        "š",
        # Arabic letters
        "ة",  # TEH MARBUTA
    ]
)

DISCARD = [
    # "(laughter)" in Farsi
    "(خنده)",
    # ASCII punctuation
    "!",
    '"',
    "#",
    "&",
    "'",
    "(",
    ")",
    ",",
    "-",
    ".",
    ":",
    ";",
    # Unicode punctuation
    "–",
    "“",
    "”",
    "…",
    "؟",
    "،",
    "؛",
    "ـ",
    # Arabic diacritics (harakat)
    "ً",
    "ٌ",
    "َ",
    "ُ",
    "ِ",
    "ّ",
    "ْ",
    "ٔ",
    # Other
    "«",
    "»",
]

# Map Arabic letters and presentation forms to their standard Persian forms.
REPLACEMENTS = {
    "أ": "ا",
    "ۀ": "ە",
    "ك": "ک",
    "ي": "ی",
    "ى": "ی",
    "ﯽ": "ی",
    "ﻮ": "و",
    "ے": "ی",
    "ﺒ": "ب",
    "ﻢ": "م",
    "٬": " ",
    "ە": "ه",
}


def maybe_normalize(text: str) -> str | None:
    # Skip utterances that contain banned characters.
    if set(text) & SKIP:
        return None
    # Remove hashtags - they are not read out in Farsi CommonVoice.
    text = " ".join(w for w in text.split() if not w.startswith("#"))
    # Replace selected characters with others.
    for lhs, rhs in REPLACEMENTS.items():
        text = text.replace(lhs, rhs)
    # Replace selected characters with empty strings.
    for tok in DISCARD:
        text = text.replace(tok, "")
    # Unify symbols that have the same meaning but different Unicode representations.
    text = unicodedata.normalize("NFKC", text)
    # Remove hamzas that were not merged with any letter by NFKC.
    text = text.replace("ء", "")
    # Collapse repeated whitespace.
    return " ".join(t for t in text.split() if t)
```
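As a quick illustration of what the normalization script above does, here is a self-contained sketch of just two of its steps (character replacement and NFKC unification) on a toy input; the trimmed replacement table below is a small subset of the full one:

```python
import unicodedata

# Subset of the replacement table: Arabic letter forms -> Persian letter forms.
REPLACEMENTS_DEMO = {"ك": "ک", "ي": "ی"}

def normalize_demo(text: str) -> str:
    # Replace Arabic variants with the standard Persian letters.
    for lhs, rhs in REPLACEMENTS_DEMO.items():
        text = text.replace(lhs, rhs)
    # Unify remaining compatibility characters.
    text = unicodedata.normalize("NFKC", text)
    # Collapse repeated whitespace.
    return " ".join(text.split())

print(normalize_demo("كتاب  خوب"))  # -> کتاب خوب (Arabic KAF replaced by Persian KEHEH)
```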
The tokenizer for this model was built using the text corpus provided with the train dataset.
We built a Google SentencePiece tokenizer [1] with the following script:
```shell
python [NEMO_GIT_FOLDER]/scripts/tokenizers/process_asr_text_tokenizer.py \
    --manifest="train_manifest.json" \
    --data_root="<OUTPUT DIRECTORY FOR TOKENIZER>" \
    --vocab_size=1024 \
    --tokenizer="spe" \
    --spe_type="bpe" \
    --spe_character_coverage=1.0 \
    --spe_max_sentencepiece_length=4 \
    --log
```
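The `--manifest` argument points to a NeMo-style ASR manifest: a JSON-lines file in which each line is a JSON object with an `audio_filepath`, a `duration` in seconds, and a `text` transcript. A minimal sketch of writing and reading such a file (the file name and entries below are illustrative, not from the actual training data):

```python
import json

# Each line of a NeMo ASR manifest is a standalone JSON object.
entries = [
    {"audio_filepath": "clips/sample_0001.wav", "duration": 3.2, "text": "متن نمونه"},
]

with open("example_manifest.json", "w", encoding="utf-8") as f:
    for entry in entries:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# The tokenizer-building script consumes only the "text" field of each line.
with open("example_manifest.json", encoding="utf-8") as f:
    texts = [json.loads(line)["text"] for line in f]
```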
The performance of automatic speech recognition models is measured using Character Error Rate (CER) and Word Error Rate (WER).
The model obtains the following scores on our custom dev and test splits of Mozilla CommonVoice Persian 15.0:
| Model | %WER/CER dev | %WER/CER test |
|---|---|---|
| RNNT head | 15.44 / 3.89 | 15.48 / 4.63 |
| CTC head | 13.18 / 3.38 | 13.16 / 3.85 |
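For reference, WER and CER are both Levenshtein edit-distance ratios: WER over whitespace-separated words, CER over characters. A minimal self-contained implementation (a sketch, not the NeMo scoring code) looks like:

```python
def edit_distance(ref, hyp) -> int:
    # Levenshtein distance between two sequences, using a rolling DP row.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(ref: str, hyp: str) -> float:
    # Word Error Rate: edit distance over whitespace-separated tokens.
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

def cer(ref: str, hyp: str) -> float:
    # Character Error Rate: edit distance over individual characters.
    return edit_distance(list(ref), list(hyp)) / len(ref)
```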
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
```python
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name="stt_fa_fastconformer_hybrid_large")
```
Using Transducer mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
    pretrained_name="stt_fa_fastconformer_hybrid_large" \
    audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
Using CTC mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
    pretrained_name="stt_fa_fastconformer_hybrid_large" \
    audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
    decoder_type="ctc"
```
This model accepts 16 kHz mono-channel audio (wav files) as input.
This model provides transcribed speech as a string for a given audio sample.
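The 16 kHz mono requirement can be checked with only the Python standard library's `wave` module; the helper name and probe file below are illustrative (the probe is a synthetic 10 ms silent wav):

```python
import wave

def is_compatible(path: str) -> bool:
    # The model expects 16 kHz, single-channel (mono) PCM wav input.
    with wave.open(path, "rb") as wav:
        return wav.getframerate() == 16000 and wav.getnchannels() == 1

# Self-check with a synthetic 16 kHz mono file.
with wave.open("probe.wav", "wb") as out:
    out.setnchannels(1)       # mono
    out.setsampwidth(2)       # 16-bit samples
    out.setframerate(16000)   # 16 kHz
    out.writeframes(b"\x00\x00" * 160)  # 10 ms of silence

ok = is_compatible("probe.wav")
```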
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
[1] Google Sentencepiece Tokenizer
[2] Conformer: Convolution-augmented Transformer for Speech Recognition
[3] NVIDIA NeMo Toolkit
License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.