Quick Deploy Question-Answering Models Using Brev.dev

Description: This tutorial demonstrates how to train, evaluate, and test three types of models for Question Answering.
Publisher: NVIDIA
Latest Version: 1
Modified: April 9, 2024
Compressed Size: 6.05 KB

Train Question-Answering Models Using NVIDIA NeMo

This resource contains a Jupyter notebook that walks through a streamlined approach to training, evaluating, and testing models for Question Answering (QA) tasks using NVIDIA's NeMo Framework. The notebook covers three QA model types: BERT-like models for extractive QA, sequence-to-sequence (S2S) models such as T5/BART for generative QA, and GPT-like models for advanced generative responses. It focuses on practical application, guiding users through generating answers from given contexts and queries, with detailed explanations of how to use NeMo.

Deploy now

To streamline your experience and jump directly into a GPU-accelerated environment with this notebook and NeMo pre-installed, click the badge below. Our 1-click deploys are powered by Brev.dev.

Click here to deploy.

Getting started

Use the 1-click deploy link above to set up a machine with NeMo installed. Once the VM is ready, use the Access Notebook button to enter the JupyterLab instance.

Models

For this notebook, we use two question-answering paradigms and three model types:

  • Extractive QA
    • BERT-like model
  • Generative QA
    • Sequence-to-Sequence (T5/BART-like) models
    • GPT-like models
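To make the extractive/generative distinction concrete, here is a minimal, framework-free sketch (plain Python, not NeMo code) of what an extractive QA head does: given per-token start and end scores over the context, it selects the highest-scoring valid span. The tokens and scores below are hand-picked for illustration, not model output.

```python
def best_span(start_scores, end_scores, max_len=10):
    """Pick the (start, end) token span maximizing start+end score,
    subject to start <= end and a maximum span length."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Toy context tokens and illustrative scores (hand-picked, not from a model).
tokens = ["NeMo", "is", "developed", "by", "NVIDIA", "."]
start = [0.1, 0.0, 0.0, 0.2, 2.5, 0.0]   # highest start score at "NVIDIA"
end   = [0.0, 0.1, 0.0, 0.0, 2.0, 0.3]   # highest end score also at "NVIDIA"

s, e = best_span(start, end)
print(" ".join(tokens[s:e + 1]))  # → NVIDIA
```

A generative model, by contrast, is not restricted to spans of the context: it decodes the answer token by token, so it can paraphrase or synthesize text that never appears verbatim in the passage.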

We use the SQuAD dataset to showcase training and inference. We train, test, and deploy all three models and evaluate the performance of each architecture.
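For readers new to SQuAD, each example pairs a context paragraph with a question and one or more gold answer spans, and models are scored with normalized exact match (EM) and token-level F1. The sketch below shows a SQuAD-style record and a simplified version of these metrics, following the normalization logic of the official SQuAD evaluation (lowercasing, stripping punctuation and articles); the record itself is an illustrative example, not taken from the dataset.

```python
import re
import string
from collections import Counter

# A SQuAD-style record: context, question, and a gold answer span.
record = {
    "context": "The NeMo framework was released by NVIDIA.",
    "question": "Who released the NeMo framework?",
    "answers": [{"text": "NVIDIA", "answer_start": 35}],
}

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace
    (the normalization used by the official SQuAD evaluation)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """EM is 1 if the normalized strings are identical, else 0."""
    return normalize(prediction) == normalize(gold)

def f1(prediction, gold):
    """Token-level F1 between a prediction and a gold answer."""
    pred_toks = normalize(prediction).split()
    gold_toks = normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

gold = record["answers"][0]["text"]
print(exact_match("NVIDIA.", gold))     # → True (punctuation is normalized away)
print(round(f1("by NVIDIA", gold), 2))  # → 0.67
```

The official script additionally takes the maximum score over all gold answers for a question; the single-reference version above is enough to show how the metrics behave.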

NeMo

NVIDIA NeMo Framework is a generative AI framework built for researchers and PyTorch developers working on large language models (LLMs), multimodal models (MMs), automatic speech recognition (ASR), and text-to-speech synthesis (TTS). NeMo provides a scalable framework to easily design, implement, and scale new AI models, building on existing pre-trained models and a simple API for configuration.