STT Mr Conformer-CTC Medium

Description

Conformer-CTC Medium model for Marathi Automatic Speech Recognition, trained on the ULCA Marathi Labelled Dataset.

Publisher

NVIDIA

Use Case

Other

Framework

PyTorch

Latest Version

1.6.0

Modified

March 17, 2022

Size

4.41 GB

Model Overview

This collection contains medium-sized versions of Conformer-CTC (around 30M parameters) trained on the ULCA Marathi Corpus, which contains around 1300 hours of Marathi speech. The model transcribes speech as Marathi characters along with spaces.

Model Architecture

The Conformer-CTC model is a non-autoregressive variant of the Conformer model [1] for Automatic Speech Recognition, which uses CTC loss/decoding instead of a transducer. You may find more information on the details of this model here: Conformer-CTC Model

Training

The NeMo toolkit [3] was used to train the models for several hundred epochs. These models were trained with this example script and this base config.
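
For illustration, a training run along these lines could be launched as follows. This is a sketch, not the exact command used for this model: the script path, config location, manifest file names, and tokenizer directory are assumptions that depend on the NeMo version and your setup:

python [NEMO_GIT_FOLDER]/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py \
 --config-path=../conf/conformer \
 --config-name=conformer_ctc_bpe \
 model.train_ds.manifest_filepath=train_manifest.json \
 model.validation_ds.manifest_filepath=dev_manifest.json \
 model.tokenizer.dir=<tokenizer_directory> \
 model.tokenizer.type=bpe \
 trainer.max_epochs=100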

The tokenizers for these models were built using the text transcripts of the train set with this script.

The checkpoint of the language model used as the neural rescorer can be found here. You may find more info on how to train and use language models for ASR models here: ASR Language Modeling
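An n-gram KenLM such as the 6-gram model used in the Performance section below can be built with NeMo's KenLM training script. The sketch below assumes the NeMo 1.x flag names and uses placeholder file paths; exact flags differ across NeMo versions:

python [NEMO_GIT_FOLDER]/scripts/asr_language_modeling/ngram_lm/train_kenlm.py \
 --nemo_model_file stt_mr_conformer_ctc_medium.nemo \
 --train_file marathi_text_corpus.txt \
 --kenlm_model_file marathi_6gram.kenlm \
 --kenlm_bin_path <path_to_kenlm>/build/bin \
 --ngram_length 6
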

Datasets

All the models in this collection are trained on the ULCA Marathi Labelled Dataset (~1300 hrs).

Tokenizer Construction

The tokenizer for this model was built with the SentencePiece tokenizer [2], using the text corpus provided with the train dataset.

We build a token set with the following script:

python [NEMO_GIT_FOLDER]/scripts/tokenizers/process_asr_text_tokenizer.py \
 --manifest="train_manifest.json" \
 --data_root="" \
 --vocab_size=512 \
 --tokenizer="spe" \
 --spe_type="unigram" \
 --spe_character_coverage=1.0 \
 --log

Performance

Performance of the ASR model is reported below in terms of Word Error Rate (WER%) and Character Error Rate (CER%), with greedy decoding and with beam-search decoding using a 6-gram KenLM trained on the AI4Bharat Corpus.

6-gram KenLM scores (beam size 128, n_gram_alpha=1.5, n_gram_beta=2.0):

  • 7.78 % WER / 3.05 % CER on Interspeech MUCS 2021 Blind Testset

Greedy Decoding Scores:

  • 14.79 % WER / 5.36 % CER on Interspeech MUCS 2021 Blind Testset
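
The beam-search numbers above can in principle be reproduced with NeMo's n-gram evaluation script. The sketch below assumes the NeMo 1.x flag names and uses placeholder paths for the test manifest and the KenLM file; exact flags differ across NeMo versions:

python [NEMO_GIT_FOLDER]/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py \
 --nemo_model_file stt_mr_conformer_ctc_medium.nemo \
 --input_manifest test_manifest.json \
 --kenlm_model_file marathi_6gram.kenlm \
 --decoding_mode beamsearch_ngram \
 --beam_width 128 \
 --beam_alpha 1.5 \
 --beam_beta 2.0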

How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Automatically load the model from NGC

import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name="stt_mr_conformer_ctc_medium")
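
Once loaded, the model can transcribe audio directly from Python; "sample.wav" below is a placeholder for a 16 kHz mono WAV file:

# Transcribe a list of audio files; returns a list of strings.
transcriptions = asr_model.transcribe(["sample.wav"])
print(transcriptions[0])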

Transcribing speech with this model

python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="stt_mr_conformer_ctc_medium" \
 audio_dir=""

Input

This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
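
Audio at other sample rates or with multiple channels can be converted beforehand, for example with ffmpeg (the file names here are placeholders):

ffmpeg -i input_audio.mp3 -ar 16000 -ac 1 sample.wav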

Output

This model provides transcribed speech as a string for a given audio sample.

Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade on speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse on accented speech.

References

[1] Conformer: Convolution-augmented Transformer for Speech Recognition

[2] Google Sentencepiece Tokenizer

[3] NVIDIA NeMo Toolkit

License

License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.