Citrinet-1024 models with a kernel scaling factor (gamma) of 0.25, trained on the ASR Set dataset with over 1,500 hours of French speech. Includes two models with separate tokenization sets, along with accompanying n-gram language models (n=4).
Both models use a Google SentencePiece [1] tokenizer with a vocabulary size of 1024 and transcribe text in the lowercase French alphabet along with spaces, apostrophes, and hyphens. The secondary model and its associated language model (indicated by the "no_hyphen" infix) omit hyphens from the tokenization.
Citrinet is a deep residual convolutional neural network architecture that is optimized for Automatic Speech Recognition tasks. There are many variants of the Citrinet family of models, which are further discussed in the paper [2].
These models were trained on a composite dataset comprising over fifteen hundred hours of speech, compiled from various publicly available sources. The NeMo toolkit [3] was used for training this model over several hundred epochs on multiple GPUs.
While training this model, we used the following cleaned datasets:
Both models use the same dataset, except for a preprocessing step that strips hyphens from the data used to train the secondary model.
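As a rough illustration (not the exact preprocessing used for training), the hyphen-stripping step can be approximated by rewriting the "text" field of each NeMo manifest entry; the manifest paths below are placeholders, and whether hyphens were replaced with spaces or simply dropped is an assumption here:
import json

# Placeholder manifest paths; substitute your own NeMo-style JSON manifests.
with open("train_manifest.json") as fin, open("train_manifest_no_hyphen.json", "w") as fout:
    for line in fin:
        entry = json.loads(line)
        # Assumption: hyphens are replaced with spaces so the "no_hyphen" model never sees them.
        entry["text"] = entry["text"].replace("-", " ")
        fout.write(json.dumps(entry, ensure_ascii=False) + "\n")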
The tokenizer for this model was built using the text corpus provided with the training dataset.
We built two Google SentencePiece tokenizers [1] with the following script:
python [NEMO_GIT_FOLDER]/scripts/tokenizers/process_asr_text_tokenizer.py \
--manifest="train_manifest.json" \
--data_root="" \
--vocab_size=1024 \
--tokenizer="spe" \
--spe_type="unigram" \
--spe_character_coverage=1.0 \
--no_lower_case \
--log
The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a much larger corpus, it will generally perform better on general audio transcription.
The latest model obtains the following greedy decoding scores on these evaluation datasets:
With beam search (width 128) and a 4-gram KenLM language model:
Note that these evaluation datasets have been filtered and preprocessed to contain only French alphabet characters, with punctuation other than hyphens and apostrophes removed. For the secondary model trained without hyphenation, scores are slightly improved (see the version sheet).
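For reference, Word Error Rate can be reproduced with the metric shipped in the toolkit; the hypotheses and references below are placeholder strings, and the import path assumes a NeMo 1.x-style installation:
from nemo.collections.asr.metrics.wer import word_error_rate

# Placeholder model outputs and reference transcripts.
hypotheses = ["bonjour tout le monde", "c'est un exemple de phrase"]
references = ["bonjour à tout le monde", "c'est un exemple de phrase"]
# word_error_rate aggregates edit distance over all pairs and divides by the total reference word count.
print(word_error_rate(hypotheses=hypotheses, references=references))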
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Automatically load the model from NGC
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name="stt_fr_citrinet_1024_gamma_0_25")
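Building on the loaded checkpoint above, here is a minimal fine-tuning sketch, assuming NeMo-style JSON manifests at hypothetical paths and a NeMo 1.x-style API; actual training is normally driven by the example scripts and configs shipped with the toolkit:
import pytorch_lightning as pl
from omegaconf import OmegaConf
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name="stt_fr_citrinet_1024_gamma_0_25")

# Placeholder manifests; each line is a JSON entry with "audio_filepath", "duration", "text".
asr_model.setup_training_data(OmegaConf.create({
    "manifest_filepath": "my_train_manifest.json", "sample_rate": 16000,
    "batch_size": 16, "shuffle": True,
}))
asr_model.setup_validation_data(OmegaConf.create({
    "manifest_filepath": "my_val_manifest.json", "sample_rate": 16000,
    "batch_size": 16, "shuffle": False,
}))

trainer = pl.Trainer(devices=1, accelerator="gpu", max_epochs=50)
trainer.fit(asr_model)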
Transcribing audio with this model
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
pretrained_name="stt_fr_citrinet_1024_gamma_0_25" \
audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
This model provides transcribed speech as a string for a given audio sample.
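A Python equivalent of the transcription example above is sketched below; the WAV filename is a placeholder, and depending on the NeMo version the argument may be named audio instead of paths2audio_files:
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name="stt_fr_citrinet_1024_gamma_0_25")
# "audio_sample.wav" is a placeholder for a 16 kHz, mono-channel WAV file.
transcriptions = asr_model.transcribe(paths2audio_files=["audio_sample.wav"])
print(transcriptions[0])  # the transcription is returned as a lowercase French string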
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
Further, since the training data contains orthography from both before and after the 1990 spelling reform, transcriptions may vary in style. If consistency is needed, downstream processing or fine-tuning may be required. If exact orthography is not necessary, using the secondary model is advised.
[1] Google SentencePiece Tokenizer
[2] Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition
[3] NVIDIA NeMo Toolkit
License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.