
Domain Classification English Bert

Domain classification of the query for weather chat bot.
Latest Version
October 6, 2023
420.37 MB

Text Classification Model Card =========================================================

Model Overview --------------

Text classification models are useful for problems such as sentiment analysis or domain detection in dialogue systems. The model provided here is trained to classify a given query into one of the four domains described below, serving as the initial step in the interactive weather chat bot presented in the GTC 2020 keynote.

Intended Use ------------

The text classification model can be used for domain classification as the first step in a dialogue system, routing each query to the appropriate domain. The classification is task-specific, defined by the domains and examples provided in the training data. In practical settings you would usually take this model (a pretrained BERT model) and train it on your own dataset.

Model Architecture ------------------

Our text classification model uses a pretrained BERT model (or other BERT-like models) followed by a classification layer on the output of the first token ([CLS]).
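The classification layer described above can be sketched as follows. This is a minimal NumPy illustration, not the shipped model: the 768-dimensional hidden size (typical of BERT-base) and the randomly initialized weights are assumptions, and in the real pipeline the [CLS] embedding comes from the pretrained BERT encoder.

```python
import numpy as np

HIDDEN_SIZE = 768  # typical BERT-base hidden size (assumption for illustration)
NUM_DOMAINS = 4    # weather, meteorology, personality, nomatch

rng = np.random.default_rng(0)
W = rng.standard_normal((HIDDEN_SIZE, NUM_DOMAINS)) * 0.02  # classifier weights
b = np.zeros(NUM_DOMAINS)                                   # classifier bias

def classify(cls_embedding: np.ndarray) -> int:
    """Apply the classification layer to the [CLS] token embedding
    produced by the encoder and return the predicted domain index."""
    logits = cls_embedding @ W + b
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    probs = exp / exp.sum()
    return int(np.argmax(probs))

# Stand-in for an encoder output; the real [CLS] vector comes from BERT.
fake_cls = rng.standard_normal(HIDDEN_SIZE)
domain = classify(fake_cls)
```

During fine-tuning, `W` and `b` are trained jointly with the encoder on the labeled queries.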

Training Data -------------

We used a proprietary dataset, collected via Mechanical Turk, that covers a large variety of queries falling into one of the following four domains:

  • weather (all weather-related queries that triggered a call to the weather API)
  • meteorology (questions about meteorology that went to the IR+QA route)
  • personality (questions about personality that went to the chit-chat route)
  • nomatch (all other queries that do not fall into any of the other categories)
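In the chat bot, the predicted domain selects the downstream route listed for each bullet above. A minimal dispatch sketch, with handler names that are illustrative rather than part of the released model:

```python
# Map each predicted domain label to a downstream handler (names illustrative).
ROUTES = {
    "weather": "weather_api",   # trigger a call to the weather API
    "meteorology": "ir_qa",     # information retrieval + question answering
    "personality": "chit_chat", # small-talk responder
    "nomatch": "fallback",      # everything else
}

def route(domain: str) -> str:
    """Return the route for a classified domain, falling back for unknown labels."""
    return ROUTES.get(domain, ROUTES["nomatch"])

print(route("weather"))  # -> weather_api
print(route("unknown"))  # -> fallback
```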

Evaluation ----------

The training dataset included 2150 example queries divided across the four domains described above. We achieved around 95% domain classification accuracy on this data.
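The reported figure is plain classification accuracy: the fraction of queries whose predicted domain matches the label. A small worked example with made-up predictions:

```python
def accuracy(predictions, labels):
    """Fraction of queries classified into the correct domain."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Toy data for illustration only, not the evaluation set described above.
preds  = ["weather", "nomatch", "weather", "personality"]
labels = ["weather", "meteorology", "weather", "personality"]
print(accuracy(preds, labels))  # 0.75
```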

How to Use This Model ---------------------

These model checkpoints are intended to be used with the Train Adapt Optimize (TAO) Toolkit. To use these checkpoints, you need a specification file (.yaml) that defines the hyperparameters, the datasets for training and evaluation, and any other information needed for the experiment. For more information on the experiment spec files for each use case, please refer to the TAO Toolkit User Guide.

Note: The model is encrypted and will only operate with the model load key tao-encode.

  • To fine-tune from a model checkpoint (.tlt), use the following command (the `-e` parameter should be a valid path to the file that specifies the fine-tuning hyperparameters, the dataset to fine-tune on, the dataset to evaluate on, and the number of epochs; placeholders in angle brackets stand for paths you supply):
!tao text_classification finetune -e <experiment_spec_file> \
 -m <model_checkpoint_file>
  • To evaluate an existing dataset using a model checkpoint (.tlt), use the following command (the `-e` parameter should be a valid path to the file that specifies the dataset being evaluated):
!tao text_classification evaluate -e <experiment_spec_file>
  • To evaluate a model checkpoint (.tlt) on a set of query examples, use the following command (the `-e` parameter should be a valid path to the file that specifies the list of queries to test):
!tao text_classification infer -e <experiment_spec_file>

License -------

By downloading and using the models and resources packaged with TAO Conversational AI, you accept the terms of the Riva license.

Ethical AI ----------

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.