A language model (LM) estimates the probability distribution over sequences of words. In general this is intractable for arbitrary sequence lengths, so it is often assumed that the probability of a word depends only on the N-1 words preceding it. This is known as an N-Gram language model. An N-Gram model of order N stores the counts of all word sequences observed in the training data, from length one (unigrams) up to length N. During inference, if a queried N-gram was not seen during training, the model backs off to the probability of the last N-1 words, weighted by a calculated backoff probability.
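The counting-plus-backoff idea can be sketched in a few lines of Python. This is a toy illustration using a fixed "stupid backoff" weight of 0.4; it is not the smoothing scheme TAO Toolkit actually uses, and all function names here are illustrative:

```python
from collections import Counter

def train_ngram_counts(tokens, n):
    """Count all k-grams for k = 1..n over a token sequence."""
    counts = Counter()
    for k in range(1, n + 1):
        for i in range(len(tokens) - k + 1):
            counts[tuple(tokens[i:i + k])] += 1
    return counts

def score(counts, context, word, total_unigrams, alpha=0.4):
    """Score `word` given up to N-1 context words.

    If the full n-gram was never seen, recursively back off to a
    shorter context, multiplying by the backoff weight alpha each time.
    """
    ngram = tuple(context) + (word,)
    if counts[ngram] > 0:
        denom = counts[tuple(context)] if context else total_unigrams
        return counts[ngram] / denom
    if context:
        # Drop the oldest context word and apply the backoff weight.
        return alpha * score(counts, tuple(context)[1:], word, total_unigrams, alpha)
    return 0.0

tokens = "the cat sat on the mat".split()
counts = train_ngram_counts(tokens, 3)
total = sum(c for gram, c in counts.items() if len(gram) == 1)

# Seen bigram: count("the cat") / count("the") = 1/2
print(score(counts, ("the",), "cat", total))   # -> 0.5
# Unseen bigram "cat mat": backs off to alpha * P("mat")
print(score(counts, ("cat",), "mat", total))
```

A production model (such as one built with KenLM) would additionally apply smoothing so that probabilities remain properly normalized; stupid backoff trades that for simplicity.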
The best place to get started with TAO Toolkit language modeling is the TAO N-Gram LM Jupyter notebook included in this resource. This resource includes one notebook.
If you are a seasoned Conversational AI developer, we recommend installing TAO and referring to the TAO documentation for detailed information.
Please make sure to install the following before proceeding further:
Note: A compatible NVIDIA GPU is required.
We recommend that you install TAO Toolkit inside a virtual environment. The steps are as follows (replace <env_path> with a directory of your choice):

virtualenv -p python3 <env_path>
source <env_path>/bin/activate
pip install jupyter notebook # If you need to run the notebooks
TAO Toolkit is a Python package hosted on the NVIDIA Python Package Index. You can install it with Python's package manager, pip:

pip install nvidia-pyindex
pip install nvidia-tao
To download the Jupyter notebook, run:
ngc registry resource download-version "nvidia/tao/ngram_lm_notebook:v1.0"
Once downloaded, start the notebook server:

jupyter notebook --ip 0.0.0.0 --allow-root --port 8888
By downloading and using the models and resources packaged with TAO Toolkit Conversational AI, you accept the terms of the Riva license.