Joint Intent and Slot classification is the task of classifying the Intent of a query and detecting all relevant Slots (Entities) for that Intent within the query.
For example, in the query "What is the weather in Santa Clara tomorrow morning?", we would like to classify the query as a Weather Intent, and detect Santa Clara as a Location slot and tomorrow morning as a date_time slot.
Intent and Slot names are usually task-specific and defined as labels in the training data. This is a fundamental step executed in any task-driven Conversational Assistant. The primary use case of this model is to jointly identify Intents and Entities in a given user query.
This is a pretrained BERT-based model with two linear classifier heads on top of it: one for classifying the intent of the query and another for classifying a slot for each token of the query. The model is trained with a combined loss over the intent and slot classification tasks on the given dataset.
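For illustration, the minimal sketch below shows how such a joint model can be put together: a BERT encoder with an intent head over the pooled output, a slot head over every token, and a combined loss that sums two cross-entropy terms. The Hugging Face-style encoder, the class and head names, and the equal loss weighting are assumptions for this sketch, not the exact TAO implementation.

```python
# Minimal sketch of a joint intent/slot classifier: one BERT encoder,
# two linear heads, and a combined loss. Class names, label counts, and
# the 1:1 loss weighting are illustrative assumptions, not the TAO code.
import torch.nn as nn
from transformers import BertModel

class JointIntentSlotModel(nn.Module):
    def __init__(self, num_intents: int, num_slots: int, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)  # one label per query
        self.slot_head = nn.Linear(hidden, num_slots)      # one label per token

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)   # [batch, num_intents]
        slot_logits = self.slot_head(out.last_hidden_state)   # [batch, seq_len, num_slots]
        return intent_logits, slot_logits

def combined_loss(intent_logits, slot_logits, intent_labels, slot_labels):
    ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 masks padding/subword positions
    intent_loss = ce(intent_logits, intent_labels)
    slot_loss = ce(slot_logits.view(-1, slot_logits.size(-1)), slot_labels.view(-1))
    return intent_loss + slot_loss  # equal weighting assumed here
```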
For each query, the model classifies the query as one of the intents from the intent dictionary, and classifies each word of the query as one of the slots from the slot dictionary, assigning the out-of-scope slot (O) to all remaining words that do not fall into any other slot category. The out-of-scope slot is part of the slot dictionary the model is trained on.
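As an illustration of this labeling scheme, the snippet below pairs each word of the weather example above with a slot label and the whole query with an intent label. The exact label strings and any on-disk file format used by TAO are assumptions made only for this example.

```python
# Illustration of the per-query intent label and per-word slot labels,
# using the weather example from above. Label strings are assumptions.
query = "what is the weather in santa clara tomorrow morning"
intent = "weather"
slots = ["O", "O", "O", "O", "O", "location", "location", "date_time", "date_time"]

words = query.split()
assert len(words) == len(slots)  # one slot label per word; 'O' marks out-of-scope words
for word, slot in zip(words, slots):
    print(f"{word:>10s} -> {slot}")
print(f"intent: {intent}")
```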
We used a proprietary dataset, collected via Mechanical Turk, that covers a variety of queries in the weather domain.
List of the recognized Intents for this model:
List of the recognized Entities:
These model checkpoints are intended to be used with the Train Adapt Optimize (TAO) Toolkit. To use these checkpoints, you need a specification file (.yaml) that defines the hyperparameters, the datasets for training and evaluation, and any other information needed for the experiment. For more information on the experiment spec files for each use case, please refer to the TAO Toolkit User Guide.
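As a rough illustration only, the snippet below generates a skeleton spec file from Python. Every field name in it is a hypothetical placeholder; the actual schema is defined in the TAO Toolkit User Guide and may differ.

```python
# Sketch of writing a skeleton experiment spec as YAML.
# All field names below are hypothetical placeholders, not the real TAO schema.
import yaml

spec = {
    "trainer": {"max_epochs": 10},                          # hypothetical hyperparameters
    "data": {
        "train_ds": {"prefix": "train", "batch_size": 32},  # hypothetical dataset entries
        "validation_ds": {"prefix": "dev", "batch_size": 32},
    },
}

with open("intent_slot_spec.yaml", "w") as f:
    yaml.safe_dump(spec, f, sort_keys=False)
```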
Note: The model is encrypted and will only operate with the model load key tao-encode.
!tao intent_slot_classification finetune -e <experiment_spec_file> -m <pretrained_model_path> -g <num_gpus>
!tao intent_slot_classification evaluate -e <experiment_spec_file> -m <model_checkpoint>
!tao intent_slot_classification infer -e <experiment_spec_file> -m <model_checkpoint>
By downloading and using the models and resources packaged with TAO Conversational AI, you accept the terms of the Riva license.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.