When deployed, the ASR engine can optionally condition the transcript output on n-gram language models.
These models are simple 3- and 4-gram language models trained with Kneser-Ney smoothing using KenLM.
The primary intended use case for these models is automatic speech recognition.
Input: Sequence of zero or more words.
Output: Likelihood of the word sequence.
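To make this input/output contract concrete, the sketch below queries one of the packaged ARPA models through the KenLM Python bindings (the `kenlm` package on PyPI). The model path and example sentence are illustrative, chosen to match the archive listing that follows; this is not the Jarvis inference path itself.

```python
# A minimal sketch of the input/output contract, assuming the KenLM
# Python bindings (pip install kenlm) and the archive layout listed below.
import kenlm

# Load one of the packaged ARPA (or KenLM .binary) language models.
model = kenlm.Model("3-gram.pruned.3e-7.arpa")  # path is illustrative

# Input: a sequence of zero or more words.
sentence = "the quick brown fox"

# Output: likelihood of the word sequence, as a log10 probability
# (including begin- and end-of-sentence transitions).
log10_prob = model.score(sentence, bos=True, eos=True)
print(log10_prob)
```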
This model archive contains files in a variety of formats:
ARPA-formatted Language Models:
- 3-gram.pruned.3e-7.arpa
- mixed_lm-lower.arpa

KenLM-formatted Binary Language Models:
- mixed-lower.binary

Rescoring Language Models:
- G.mixed_lm.3-gram.pruned.3e-7.carpa
- G.mixed_lm.carpa

FST-formatted Language Models:
- TLG.mixed_lm.3-gram.pruned.3e-7.fst

Vocabulary Files:
- words.mixed_lm.3-gram.pruned.3e-7.fst
- words.mixed_lm.txt
The ARPA- and KenLM-binary-formatted files can be used directly by the CTC CPU Decoder. The GPU decoder uses an FST-formatted language model (derived from the pruned n-gram model) and can optionally use the carpa-formatted LMs for rescoring.
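To illustrate what the rescoring step does, the sketch below re-ranks a decoder's n-best hypotheses by combining each hypothesis's decoder score with a language-model score. The n-best list, scores, and interpolation weight are invented for the example, and since the `kenlm` Python bindings read ARPA and KenLM binary files rather than .carpa files, the unpruned ARPA model stands in for the .carpa rescoring LMs here.

```python
# A minimal sketch of n-best rescoring, assuming the kenlm Python
# bindings. The unpruned ARPA model stands in for the .carpa files,
# which the kenlm bindings do not read.
import kenlm

rescoring_lm = kenlm.Model("mixed_lm-lower.arpa")  # path from this archive

# Hypothetical n-best list: (transcript, decoder log score).
nbest = [
    ("i scream for ice cream", -14.2),
    ("ice cream for ice cream", -14.0),
    ("i scream four ice cream", -13.9),
]

LM_WEIGHT = 0.8  # interpolation weight; an assumed tuning value

def rescored(hypothesis):
    text, decoder_score = hypothesis
    # Combine the decoder score with the LM's log10 probability.
    return decoder_score + LM_WEIGHT * rescoring_lm.score(text, bos=True, eos=True)

best_text, _ = max(nbest, key=rescored)
print(best_text)
```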
The mixed language model provided here is English-only and is trained on a mix of transcriptions from the LibriSpeech, Switchboard, and Fisher datasets.
Currently, TLT cannot train LMs for ASR inference. To train a custom LM, use KenLM and consult the Jarvis Documentation.
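A minimal sketch of that KenLM workflow is below. It assumes the `lmplz` and `build_binary` executables from a KenLM build are on the PATH, and that `corpus.txt` is a hypothetical plain-text training corpus with one normalized sentence per line.

```python
# A minimal sketch of training a custom LM with KenLM, assuming the
# lmplz and build_binary executables are on PATH and corpus.txt holds
# one normalized sentence per line (both names are illustrative).
import subprocess

# Estimate a 3-gram model with KenLM's modified Kneser-Ney smoothing;
# lmplz reads the corpus on stdin and writes an ARPA model to stdout.
with open("corpus.txt", "rb") as corpus, open("custom_lm.arpa", "wb") as arpa:
    subprocess.run(["lmplz", "-o", "3"], stdin=corpus, stdout=arpa, check=True)

# Compile the ARPA model into KenLM's binary format for faster loading.
subprocess.run(["build_binary", "custom_lm.arpa", "custom_lm.binary"], check=True)
```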
By downloading and using the models and resources packaged with TLT Conversational AI, you accept the terms of the Jarvis license.