Joint Intent and Slot Classification DistilBert

Intent and Slot classification of queries for the Misty bot with a DistilBERT model trained on weather, smalltalk, and POI (places of interest) data.
Latest Version: October 6, 2023 (254.01 MB)

Intent and Slot Classification DistilBERT Model Card

Model Overview

Joint Intent and Slot classification is the task of classifying the Intent of a query and detecting all Slots (Entities) relevant to that Intent. For example, for the query "What is the weather in Santa Clara tomorrow morning?", we would like to classify the query as a Weather Intent, and detect Santa Clara as a Location slot and tomorrow morning as a date_time slot.

Intended Use

Intent and Slot names are usually task-specific and defined as labels in the training data. This is a fundamental step in any task-driven Conversational Assistant. The primary use case of this model is to jointly identify Intents and Entities in a given user query.

Model Architecture

This is a pretrained DistilBERT-based model with two linear classifier heads on top of it: one classifies the intent of the query, and the other classifies a slot for each token of the query. The model is trained with a combined loss function on the joint Intent and Slot classification task on the given dataset.
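
The two-head design with a combined loss can be sketched as follows. This is a minimal illustration, not the TAO implementation: a toy embedding encoder stands in for the pretrained DistilBERT encoder, and all dimensions and class counts are made up for the example.

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Sketch of the two-head architecture: a shared encoder feeds an
    intent head (one label per query) and a slot head (one label per token).
    A toy embedding stands in for the pretrained DistilBERT encoder."""

    def __init__(self, vocab_size=1000, hidden=64, num_intents=75, num_slots=32):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, hidden)    # stand-in for DistilBERT
        self.intent_head = nn.Linear(hidden, num_intents)  # query-level classifier
        self.slot_head = nn.Linear(hidden, num_slots)      # token-level classifier

    def forward(self, token_ids):
        h = self.encoder(token_ids)                 # (batch, seq_len, hidden)
        intent_logits = self.intent_head(h[:, 0])   # first token summarizes the query
        slot_logits = self.slot_head(h)             # per-token slot scores
        return intent_logits, slot_logits

def joint_loss(intent_logits, slot_logits, intent_labels, slot_labels):
    """Combined loss: intent cross-entropy plus per-token slot cross-entropy."""
    ce = nn.CrossEntropyLoss()
    intent_loss = ce(intent_logits, intent_labels)
    slot_loss = ce(slot_logits.flatten(0, 1), slot_labels.flatten())
    return intent_loss + slot_loss

# Shape check on a dummy batch of 2 queries, 8 tokens each.
model = JointIntentSlotModel()
ids = torch.randint(0, 1000, (2, 8))
intent_logits, slot_logits = model(ids)
print(tuple(intent_logits.shape), tuple(slot_logits.shape))  # (2, 75) (2, 8, 32)
```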

For each query, the model classifies it as one of the intents from the intent dictionary, and for each word of the query it assigns one of the slots from the slot dictionary, using the out-of-scope slot for all remaining words that do not fall into another slot category. The out-of-scope slot (O) is part of the slot dictionary the model is trained on.
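
The joint output for the example query from the overview can be visualized like this. The intent and slot labels shown are taken from the dictionaries listed below, but the pairing with this particular query is a hypothetical illustration, and the actual TAO output format may differ.

```python
# Hypothetical joint output for one query: one intent label per query,
# one slot label per word, with "O" marking words outside any slot.
query = "What is the weather in Santa Clara tomorrow morning"
tokens = query.split()

intent = "weather.temperature"  # assumed intent label for illustration
slots = ["O", "O", "O", "O", "O",
         "weatherplace", "weatherplace",
         "weathertime", "weathertime"]

assert len(slots) == len(tokens)
for token, slot in zip(tokens, slots):
    print(f"{token:10s} -> {slot}")
print("intent:", intent)
```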

Training Data

We used a proprietary dataset that has queries related to the weather, smalltalk, map, and POI domains.

List of the recognized Intents for this model:

  • weather.temperature, weather.temperature_yes_no
  • weather.rainfall, weather.rainfall_yes_no
  • weather.snow, weather.snow_yes_no
  • weather.humidity, weather.humidity_yes_no
  • weather.windspeed
  • weather.sunny
  • weather.cloudy
  • context.continue
  • navigation.startnavigation, navigation.startnavigationpoi
  • navigation.stopnavigation
  • navigation.navigationavoidhighways
  • navigation.istollsonroute
  • navigation.getspeedlimitonroute
  • navigation.getdistance, navigation.getdistancepoi
  • navigation.gettraveltime, navigation.gettraveltimepoi
  • navigation.getextrastoptime
  • navigation.geteta
  • navigation.showdirection, navigation.showdirectionpoi
  • navigation.showdirectionavoidhighways
  • navigation.showmap, navigation.showmappoi
  • navigation.getnumber
  • navigation.getrating
  • navigation.isclosed
  • nomatch
  • smalltalk.personality_hello
  • smalltalk.personality_nice_to_meet_you
  • smalltalk.personality_bot_age
  • smalltalk.personality_bots_owner
  • smalltalk.personality_bot_creator
  • smalltalk.personality_bot_is_happy
  • smalltalk.personality_what_bot_can_do
  • smalltalk.bot_personality_weather_interest
  • smalltalk.personality_bot_favorite_activity
  • smalltalk.personality_bot_name
  • smalltalk.personality_bot_challenge
  • smalltalk.personality_bot_location
  • smalltalk.personality_whats_going_on
  • smalltalk.personality_how_is_bot_doing
  • smalltalk.personality_can_bot_do_physical_activity
  • smalltalk.personality_bot_is_boring
  • smalltalk.personality_ask_me_question
  • smalltalk.personality_can_bot_do_action
  • smalltalk.personality_bot_gender
  • smalltalk.personality_bot_family
  • smalltalk.personality_what_does_bot_eat
  • smalltalk.personality_met_other_bot
  • smalltalk.personality_opinion_other_bot
  • smalltalk.personality_bot_love_life
  • smalltalk.personality_philosophical_question
  • smalltalk.personality_bot_user_comparision
  • smalltalk.personality_goodbye
  • smalltalk.personality_greet_user
  • smalltalk.personality_who_is_smarter
  • smalltalk.personality_opinion_on_ai
  • smalltalk.personality_is_user_beautiful
  • smalltalk.personality_ai_conquer_world
  • smalltalk.personality_bot_is_smart
  • smalltalk.personality_bot_available
  • smalltalk.personality_do_something_funny
  • smalltalk.personality_tell_me_a_joke
  • smalltalk.personality_sing_a_song
  • smalltalk.personality_bot_is_not_funny
  • smalltalk.personality_bot_reapeting_same_thing
  • smalltalk.personality_nice_talking_to_you
  • smalltalk.personality_happy_x_day
  • smalltalk.personality_thank_bot
  • smalltalk.personality_bot_is_fired
  • smalltalk.personality_goodwork_bot
  • smalltalk.personality_bot_is_useless
  • smalltalk.personality_user_dont_understand_bot
  • smalltalk.personality_user_apologize
  • smalltalk.personality_help_from_bot
  • smalltalk.personality_bot_affirmation
  • smalltalk.user_loves_bot
  • smalltalk.user_feel_emotion
  • smalltalk.bot_personality_language_bot_speak
  • smalltalk.bot_personality_in_free_time
  • smalltalk.bot_personality_about_bot

List of the recognized Entities:

  • O (out of scope)
  • weathertime
  • weatherplace
  • temperatureunit
  • windspeedunit
  • rainfallunit
  • snowunit
  • weatherforecastdaily
  • season_rain
  • season_cold
  • season_sunny
  • season_spring
  • cuisinetype
  • poiplace
  • poisortcriteria
  • destinationplace
  • sourceplace
  • speedunit
  • distanceunit
  • distance
  • navigationmethod
  • unknown_location
  • bot_age
  • bot_birthday
  • hobby
  • favorite_color
  • favorite_food
  • favorite_animal
  • greettime.morning
  • greettime.night
  • greettime.evening
  • smalltalk.festival
  • smalltalk.occassion


The Misty model is trained on a dataset covering multiple domains (weather, POI, smalltalk) plus nomatch to identify outlier queries. The model is trained on over 20,000 unique queries from the various domains/intents for 50 epochs. Its performance is evaluated on a held-out set of around 3,500 unique queries, on which it achieves an F1 score of 97.59 for intent classification and 99.69 for slot identification.
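
For intuition on the slot metric, here is a minimal token-level micro-F1 sketch. This is a simplified illustration on made-up labels; TAO's actual evaluation may differ in details (for example, span-level rather than token-level matching).

```python
def micro_f1(gold, pred, outside="O"):
    """Token-level micro F1 over slot labels, ignoring the out-of-scope slot.
    Simplified illustration only."""
    tp = sum(g == p and g != outside for g, p in zip(gold, pred))
    fp = sum(p != outside and g != p for g, p in zip(gold, pred))
    fn = sum(g != outside and g != p for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: the prediction misses one of three non-O gold tags.
gold = ["O", "weatherplace", "weatherplace", "weathertime", "O"]
pred = ["O", "weatherplace", "weatherplace", "O", "O"]
print(round(micro_f1(gold, pred), 3))  # 0.8
```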

How to Use This Model

These model checkpoints are intended to be used with the Train Adapt Optimize (TAO) Toolkit. In order to use these checkpoints, there should be a specification file (.yaml) that specifies hyperparameters, datasets for training and evaluation, and any other information needed for the experiment. For more information on the experiment spec files for each use case, please refer to the TAO Toolkit User Guide.

Note: The model is encrypted and will only operate with the model load key tlt_encode.

To fine-tune from a model checkpoint (.tlt), use the following command (the `-e` parameter should be a valid path to the file that specifies the fine-tuning hyperparameters, the dataset to fine-tune on, the dataset to evaluate on, and the number of epochs):

!tao intent_slot_classification finetune -e <experiment_spec> \
 -m <model_checkpoint> \
 -g <num_gpus>

To evaluate an existing dataset using a model checkpoint (.tlt), use the following command (the `-e` parameter should be a valid path to the file that specifies the dataset being evaluated):

!tao intent_slot_classification evaluate -e <experiment_spec> \
 -m <model_checkpoint>

To evaluate a model checkpoint (.tlt) on a set of query examples, use the following command (the `-e` parameter should be a valid path to the file that specifies the list of queries to test):

!tao intent_slot_classification infer -e <experiment_spec> \
 -m <model_checkpoint>


The model architecture is based on the paper:


By downloading and using the models and resources packaged with TAO Conversational AI, you accept the terms of the Riva license.

Suggested reading

  • More information about the TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone.
  • Read the TAO Toolkit Getting Started guide and release notes.
  • If you have any questions or feedback, please refer to the discussions on the TAO Toolkit Developer Forums.
  • Deploy your model for production using Riva. Learn more about the Riva framework.

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.