This is a Citrinet-1024 model with a kernel scaling factor (gamma) of 0.25, trained on the open-source AISHELL-2 Mandarin Chinese corpus.
It uses a character encoding scheme and transcribes text in the standard character set provided with the AISHELL-2 Mandarin corpus.
Citrinet is a deep residual convolutional neural network architecture optimized for automatic speech recognition tasks. There are many variants in the Citrinet family of models, which are discussed further in the Citrinet paper.
This model was initially trained on roughly 42,000 hours of English speech from the Multilingual LibriSpeech corpus, then fine-tuned on the open-source AISHELL-2 corpus, which consists of about 1,000 hours of transcribed Mandarin speech. The NeMo toolkit was used to train this model for several hundred epochs on multiple GPUs.
While training this model, we used the following datasets:
The performance of automatic speech recognition models is measured using Character Error Rate (CER). Since this model is pre-trained on a much larger speech corpus and then fine-tuned on this dataset, it will generally transcribe audio more accurately.
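For reference, CER is the character-level edit distance between the hypothesis and the reference transcript, normalized by the reference length. The following is an illustrative sketch (not part of the model card tooling); the example strings are placeholders:

```python
# Illustrative sketch: CER = character-level Levenshtein distance / reference length.
def char_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = list(reference), list(hypothesis)
    # Dynamic-programming edit distance over characters (assumes a non-empty reference).
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # deletion
                        dp[j - 1] + 1,    # insertion
                        prev + (r != h))  # substitution
            prev = cur
    return dp[-1] / len(ref)

# One substitution over six reference characters: CER ≈ 0.167
print(char_error_rate("今天天气很好", "今天天气真好"))
```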
The model obtains the following scores on these evaluation datasets:
Note that these scores on AISHELL-2 are not particularly indicative of the quality of transcriptions that models trained on ASR Set will achieve, but they are a useful proxy.
The model is available for use in the NeMo toolkit and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
```python
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="stt_zh_citrinet_1024_gamma_0_25")
```
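Once instantiated, the checkpoint can also transcribe audio directly from Python. A minimal sketch follows; the file path is a placeholder, and the exact return type of `transcribe()` can vary between NeMo versions:

```python
# The path below is a placeholder; point it at a 16 kHz mono-channel WAV file.
transcriptions = asr_model.transcribe(["<path/to/audio.wav>"])
print(transcriptions[0])
```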
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="stt_zh_citrinet_1024_gamma_0_25" \
 audio_dir=""
```
This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
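Recordings in other formats can be converted before transcription. The following is a minimal sketch, assuming the librosa and soundfile packages are installed; the file names are placeholders:

```python
# Resample an arbitrary recording to the 16 kHz mono WAV format the model expects.
import librosa
import soundfile as sf

# "input_recording.wav" is a hypothetical input file with any sample rate or channel count.
audio, sr = librosa.load("input_recording.wav", sr=16000, mono=True)
sf.write("input_recording_16k_mono.wav", audio, 16000)
```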
This model provides transcribed speech as a string for a given audio sample.
Since this model was trained on publicly available speech datasets, its performance might degrade on speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse on accented speech.