This model recognizes the intent of an English query by matching it against an arbitrary list of candidate intent labels (zero-shot classification).
The model consists of a pretrained BERT base uncased model followed by a 2-layer sequence classification head.
The NeMo toolkit was used to train the model for three epochs.
The model was trained on the MNLI (Multi-Genre Natural Language Inference) dataset from https://dl.fbaipublicfiles.com/glue/data/MNLI.zip.
The performance of the model was tested on the MNLI dev sets. MNLI contains two dev sets, matched and mismatched, which contain genres seen or not seen during training, respectively. This model achieves an accuracy of 84.9% and 84.8% on the matched and mismatched dev sets, respectively.
The model is available for use in the NeMo toolkit, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
import nemo.collections.nlp as nemo_nlp

model = nemo_nlp.models.ZeroShotIntentModel.from_pretrained(model_name="zeroshotintent_en_bert_base_uncased")

queries = [
    "What is the weather in Santa Clara tomorrow morning?",
    "I'd like a veggie burger and fries",
    "Play the latest Taylor Swift album",
]
candidate_labels = ["Food order", "Weather query", "Play music"]
predictions = model.predict(queries, candidate_labels)
The model's predict method accepts two lists of strings: the first is the list of queries to be classified, and the second is the list of candidate labels.
The predict method returns a list of dictionaries, one per input query. Each dictionary has the keys "sentence", "labels", and "scores". "sentence" contains the input query, while "labels" and "scores" are parallel lists (each score corresponds to the label at the same index), sorted from highest to lowest score.
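Since "labels" and "scores" are sorted from highest to lowest score, the top-scoring intent for each query is always at index 0. The sketch below shows how a caller might consume the output; the predictions list here is hypothetical sample data in the format described above, not actual model output.

```python
# Hypothetical predictions in the documented format: one dict per query,
# with "labels" and "scores" as parallel lists sorted by descending score.
predictions = [
    {
        "sentence": "What is the weather in Santa Clara tomorrow morning?",
        "labels": ["Weather query", "Food order", "Play music"],
        "scores": [0.98, 0.01, 0.01],
    },
]

# Because the lists are sorted, the best label and its score sit at index 0.
for p in predictions:
    top_label = p["labels"][0]
    top_score = p["scores"][0]
    print(f"{p['sentence']!r} -> {top_label} ({top_score:.2f})")
```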
No known limitations at this time.
Devlin, J. et al. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
Williams, A. et al. (2018). A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference.