STT En Zh Multilingual Code-Switched FastConformer Transducer L
Description

English + Mandarin Multilingual and Code-Switched Speech Recognition FastConformer Transducer Large Model

Publisher

-

Latest Version

1.22.0

Modified

September 26, 2023

Size

437.6 MB

Model Overview

This collection contains a FastConformer-Transducer large model (around 120M parameters) for multilingual and code-switched speech recognition of English-Mandarin speech. It uses a Google SentencePiece [1] tokenizer with a vocabulary size of 1024 for English and 5000 characters for Mandarin.

It can transcribe audio into English, Mandarin, or a mix of both within the same sentence; the language is detected automatically.

Model Architecture

Conformer-Transducer is the Conformer [2] model with an RNNT/Transducer decoder and loss. You may find more information on the details here: Conformer Transducer. FastConformer [3] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling, trained with the Transducer loss. You may find more information on the details of FastConformer here: Fast-Conformer Model.
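
As a small sketch, the encoder hyperparameters (including the subsampling setup) can be inspected from the checkpoint's config once the model has been loaded as shown under "How to Use this Model" below; asr_model refers to that loaded model and is an assumption here, not something defined in this section.

# Sketch: print the encoder section of the model config (subsampling, layers, d_model, ...).
# Assumes asr_model was loaded with from_pretrained() as shown in the usage section below.
from omegaconf import OmegaConf

print(OmegaConf.to_yaml(asr_model.cfg.encoder))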

Training

The NeMo toolkit [4] was used to train the models for several hundred epochs. The models were trained with this example script and this base config. The SentencePiece tokenizers [1] for these models were built using the text transcripts of the train set with this script.
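
A minimal, hedged sketch of what such a training run can look like is shown below; the script path and override names are assumptions based on the public NeMo repository layout, not taken from this card, and the linked example script and base config remain the reference.

# Sketch of a training invocation; paths and overrides are placeholders/assumptions.
python [NEMO_GIT_FOLDER]/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py \
  --config-path=<path to the directory containing the base config> \
  --config-name=<name of the base config without .yaml> \
  model.train_ds.manifest_filepath=<path to train manifest(s)> \
  model.validation_ds.manifest_filepath=<path to validation manifest(s)> \
  model.tokenizer.dir=<path to the built SentencePiece tokenizer directory> \
  trainer.devices=-1 \
  trainer.accelerator="gpu" \
  trainer.max_epochs=<number of epochs>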

Datasets

The model is trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of English and Mandarin speech:

  • LibriSpeech [5] 960 hours of English speech
  • AISHELL-2 (iOS) [6] 1000 hours of Mandarin speech
  • SEAME [7] 100 hours of Mandarin, English, and natural intra-sentential code-switched data

Performance

The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER) on English (en), Character Error Rate (CER) on Mandarin (zh), and Mixed Error Rate (MER) on multilingual/code-switched en-zh data.

  • 2.4% WER on LibriSpeech test-clean (en)
  • 5.5% WER on LibriSpeech test-other (en)
  • 6.7% CER on AISHELL-2 iOS test (zh)
  • 15.0% MER on the SEAME dev set (en-zh)
  • 14.7% MER on the SEAME Mandarin test set (en-zh)
  • 21.7% MER on the SEAME Singapore English test set (en-zh)
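
As a rough sketch, WER and CER can be computed with NeMo's error-rate helper; the reference and hypothesis strings below are placeholders, not evaluation data from this card.

# Sketch: score transcripts with NeMo's error-rate helper.
# The reference and hypothesis strings are placeholders, not real evaluation data.
from nemo.collections.asr.metrics.wer import word_error_rate

references = ["the cat sat on the mat"]   # ground-truth transcripts
hypotheses = ["the cat sat on a mat"]     # model outputs
print(word_error_rate(hypotheses=hypotheses, references=references))                 # WER (English)
print(word_error_rate(hypotheses=hypotheses, references=references, use_cer=True))   # CER (Mandarin)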

How to Use this Model

The model is available for use in the NeMo toolkit [4] and can be used as a pre-trained checkpoint for fine-tuning on another dataset.

Automatically load the model from NGC

import nemo.collections.asr as nemo_asr

# Load the pre-trained checkpoint from NGC
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained(model_name="stt_enzh_fastconformer_transducer_large_codesw")
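
Once loaded, the model can also be called directly from Python to transcribe audio files; the file path below is a placeholder.

# Transcribe a list of 16 kHz mono wav files; the path is a placeholder.
transcriptions = asr_model.transcribe(["<path to a 16 kHz mono wav file>"])
print(transcriptions)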

Transcribing audio with this model

python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="stt_enzh_fastconformer_transducer_large_codesw" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"

Input

This model accepts 16000 Hz (16 kHz) mono-channel audio (wav files) as input.
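
If your recordings are not already 16 kHz mono wav, a minimal conversion sketch is shown below; it assumes the librosa and soundfile packages, which are not dependencies listed on this card.

# Sketch: resample and downmix an audio file to 16 kHz mono before transcription.
# librosa and soundfile are assumptions, not requirements stated on this card.
import librosa
import soundfile as sf

audio, sr = librosa.load("<path to input audio>", sr=16000, mono=True)  # resample + downmix
sf.write("audio_16k_mono.wav", audio, sr)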

Output

This model provides transcribed speech as a string for a given audio sample. The output string may contain English or Mandarin characters, depending on the languages used in the audio sample.

Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse on accented speech.

References

[1] Google SentencePiece Tokenizer

[2] Conformer: Convolution-augmented Transformer for Speech Recognition

[3] Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition

[4] NVIDIA NeMo Toolkit

[5] LibriSpeech ASR Corpus

[6] AISHELL-2

[7] Mandarin-English Code-Switching in South-East Asia (SEAME)

License

License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.