When deployed, the ASR engine can optionally condition the transcript output on n-gram language models.
These models are simple 4-gram language models trained with Kneser-Ney smoothing using KenLM.
The primary intended use case for these models is automatic speech recognition.
Input: A sequence of zero or more words.
Output: The likelihood of the word sequence.
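To illustrate the input/output contract, here is a minimal sketch of n-gram sequence scoring. It is not the KenLM implementation: it uses a tiny hypothetical corpus, bigrams instead of 4-grams, and add-one smoothing instead of Kneser-Ney, but the interface is the same — words in, log-likelihood out.

```python
import math
from collections import Counter

# Hypothetical toy corpus; a real model is trained on far more text.
corpus = [["<s>", "the", "cat", "sat", "</s>"],
          ["<s>", "the", "dog", "sat", "</s>"],
          ["<s>", "the", "cat", "ran", "</s>"]]

# Count unigrams and bigrams (the shipped models use 4-grams with
# Kneser-Ney smoothing; this sketch uses add-one smoothing).
unigrams = Counter(w for s in corpus for w in s)
bigrams = Counter((a, b) for s in corpus for a, b in zip(s, s[1:]))
vocab_size = len(unigrams)

def log_likelihood(words):
    """Log10 probability of a word sequence under the toy bigram model."""
    seq = ["<s>"] + words + ["</s>"]
    total = 0.0
    for a, b in zip(seq, seq[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
        total += math.log10(p)
    return total
```

A sequence seen in training scores higher than a scrambled one, which is exactly the signal the ASR decoder uses to prefer fluent transcripts:

```python
log_likelihood(["the", "cat", "sat"])   # higher (less negative)
log_likelihood(["sat", "the", "cat"])   # lower
```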
This model archive contains files in several formats:

ARPA-formatted language models:
- 4gram-pruned-0_1_7_9-ru-lm-set-1.0.arpa

KenLM-formatted binary language models:
- riva_de_asr_set_2.0_4gram.binary
- 4gram-pruned-0_1_7_9-ru-lm-set-1.0.bin

Flashlight decoder vocabulary files:
- dict_vocab.txt
Both the ARPA-formatted and the KenLM binary-formatted files can be used directly by the CTC CPU decoder.
TLT currently cannot train language models for ASR inference. To train a custom LM for ASR inference, use KenLM directly and consult the Riva documentation.
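A typical KenLM training workflow looks like the following sketch. The corpus and output filenames are illustrative, and the pruning thresholds mirror those suggested by the archive's file names; adjust both for your data.

```shell
# Train a 4-gram ARPA model from a plain-text corpus
# (one sentence per line). corpus.txt is a placeholder name.
lmplz -o 4 --prune 0 1 7 9 < corpus.txt > custom-4gram.arpa

# Convert the ARPA file to KenLM's binary format for faster loading.
build_binary custom-4gram.arpa custom-4gram.bin

# Sanity-check the model by scoring a sample sentence.
echo "hello world" | query custom-4gram.bin
```

The resulting `.arpa` or `.bin` file can then be supplied to the CTC CPU decoder in place of the packaged models.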
By downloading and using the models and resources packaged with TLT Conversational AI, you accept the terms of the Riva license.