
TAO - ASR CitriNet Riva Deployment

Train Adapt Optimize (TAO) Toolkit provides the capability to export your model in a format that can be deployed using NVIDIA Riva, a highly performant application framework for multi-modal conversational AI services using GPUs.

This tutorial explores taking a .riva model, the result of the tao speech_to_text_citrinet export command, and leveraging the Riva ServiceMaker framework to aggregate all the necessary artifacts for Riva deployment to a target environment. Once the model is deployed in Riva, you can issue inference requests to the server. We will demonstrate how quick and straightforward this whole process is.


Learning Objectives

In this notebook, you will learn how to:

  • Use Riva ServiceMaker to take a TAO-exported .riva model and convert it to .rmir
  • Deploy the model(s) locally on the Riva Server
  • Send inference requests from a demo client using Riva API bindings.

Pre-requisites

To follow along, please make sure:

  • You have access to NVIDIA NGC, and are able to download the Riva Quickstart resources
  • You have a .riva model file that you wish to deploy. You can obtain this from tao <task> export (with export_format=RIVA). Please refer to the tutorial on Automatic Speech Recognition using Train Adapt Optimize (TAO) Toolkit for more details on training and exporting a .riva model.

Riva ServiceMaker

ServiceMaker is the set of tools that aggregates all the necessary artifacts (models, files, configurations, and user settings) for Riva deployment to a target environment. It has two main components, as shown below:

1. Riva-build

This step helps build a Riva-ready version of the model. Its only output is an intermediate format (called RMIR) of an end-to-end pipeline for the supported services within Riva. Here, we consider an ASR CitriNet model.

riva-build combines one or more exported models (.riva files) into a single file containing an intermediate format called the Riva Model Intermediate Representation (.rmir). This file contains a deployment-agnostic specification of the whole end-to-end pipeline along with all the assets required for the final deployment and inference. Please check out the documentation to find out more.

In [1]:
# IMPORTANT: UPDATE THESE PATHS 

# ServiceMaker Docker
RIVA_SM_CONTAINER = "<add container name>"

# Directory where the .riva model is stored ($MODEL_LOC/*.riva)
MODEL_LOC = "<add path to model location>"

# Name of the .riva file
MODEL_NAME = "<add model name>"

# Encryption key used when the model was exported with TAO
KEY = "<add encryption key used for trained model>"
In [2]:
# Get the ServiceMaker docker
! docker pull $RIVA_SM_CONTAINER
In [3]:
# Syntax: riva-build <task-name> output-dir-for-rmir/model.rmir:key dir-for-riva/model.riva:key
! docker run --rm --gpus 0 -v $MODEL_LOC:/data $RIVA_SM_CONTAINER -- \
            riva-build speech_recognition /data/asr.rmir:$KEY /data/$MODEL_NAME:$KEY --offline \
            --chunk_size=1.6 \
            --padding_size=1.6 \
            --ms_per_timestep=80 \
            --greedy_decoder.asr_model_delay=-1 \
            --featurizer.use_utterance_norm_params=False \
            --featurizer.precalc_norm_time_steps=0 \
            --featurizer.precalc_norm_params=False \
            --decoder_type=greedy
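
If the build succeeds, the intermediate .rmir file is written to the mounted model directory. As a quick sanity check (assuming the same MODEL_LOC as above), you can list it:

! ls -lh $MODEL_LOC/asr.rmir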

2. Riva-deploy

The deployment tool takes as input one or more Riva Model Intermediate Representation (RMIR) files and a target model repository directory. It creates an ensemble configuration specifying the execution pipeline and finally writes all those assets to the output model repository directory.

In [4]:
# Syntax: riva-deploy -f dir-for-rmir/model.rmir:key output-dir-for-repository
! docker run --rm --gpus 0 -v $MODEL_LOC:/data $RIVA_SM_CONTAINER -- \
            riva-deploy -f  /data/asr.rmir:$KEY /data/models/
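
riva-deploy writes an optimized model repository under $MODEL_LOC/models. You can list that directory to confirm the deployment artifacts were generated (assuming the same MODEL_LOC as above):

! ls $MODEL_LOC/models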

Start Riva Server

Once the model repository is generated, we are ready to start the Riva server. From this step onwards, you need the Riva QuickStart resource downloaded from NGC. Set the path to the directory here:

In [5]:
# Set the Riva QuickStart directory
RIVA_DIR = "<Path to the uncompressed folder downloaded from the QuickStart (include the folder name)>"
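
If you have the NGC CLI installed, the QuickStart resource can also be fetched directly from the notebook. The command below is only a sketch: the version tag depends on the Riva release you are targeting, so substitute the version you need.

# Example only: fetch the Riva QuickStart resource with the NGC CLI (replace <version>)
! ngc registry resource download-version "nvidia/riva/riva_quickstart:<version>"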

Next, we modify config.sh to enable the relevant Riva services (asr for the CitriNet model), provide the encryption key, and set the path to the model repository (riva_model_loc) generated in the previous step, among other configurations.

For instance, if the model repository above was generated at $MODEL_LOC/models, then you can specify riva_model_loc as the same directory as MODEL_LOC.

Pretrained versions of the models specified in models_asr/nlp/tts are fetched from NGC. Since we are using our custom model, we can comment out the entries in models_asr (and any others that are not relevant to your use case).

config.sh snippet

# Enable or Disable Riva Services 
service_enabled_asr=true                                                      ## MAKE CHANGES HERE
service_enabled_nlp=false                                                      ## MAKE CHANGES HERE
service_enabled_tts=false                                                     ## MAKE CHANGES HERE

# Specify one or more GPUs to use
# specifying more than one GPU is currently an experimental feature, and may result in undefined behaviours.
gpus_to_use="device=0"

# Specify the encryption key to use to deploy models
MODEL_DEPLOY_KEY="tlt_encode"                                                  ## MAKE CHANGES HERE

# Locations to use for storing models artifacts
#
# If an absolute path is specified, the data will be written to that location
# Otherwise, a docker volume will be used (default).
#
# riva_init.sh will create a `rmir` and `models` directory in the volume or
# path specified. 
#
# RMIR ($riva_model_loc/rmir)
# Riva uses an intermediate representation (RMIR) for models
# that are ready to deploy but not yet fully optimized for deployment. Pretrained
# versions can be obtained from NGC (by specifying NGC models below) and will be
# downloaded to $riva_model_loc/rmir by `riva_init.sh`
# 
# Custom models produced by NeMo or TAO and prepared using riva-build
# may also be copied manually to this location ($riva_model_loc/rmir).
#
# Models ($riva_model_loc/models)
# During the riva_init process, the RMIR files in $riva_model_loc/rmir
# are inspected and optimized for deployment. The optimized versions are
# stored in $riva_model_loc/models. The riva server exclusively uses these
# optimized versions.
riva_model_loc="<add path>"                              ## MAKE CHANGES HERE (Replace with MODEL_LOC)                      
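
Instead of editing config.sh by hand, you can apply the same changes from the notebook. The sed commands below are a minimal sketch: they assume the variable names shown in the snippet above, that RIVA_DIR, MODEL_LOC, and KEY are set as in the earlier cells, and that you want to reuse the TAO export key as the deployment key.

# Sketch only: apply the config.sh edits from the notebook
! sed -i "s|^service_enabled_nlp=.*|service_enabled_nlp=false|" $RIVA_DIR/config.sh
! sed -i "s|^service_enabled_tts=.*|service_enabled_tts=false|" $RIVA_DIR/config.sh
! sed -i "s|^MODEL_DEPLOY_KEY=.*|MODEL_DEPLOY_KEY=\"$KEY\"|" $RIVA_DIR/config.sh
! sed -i "s|^riva_model_loc=.*|riva_model_loc=\"$MODEL_LOC\"|" $RIVA_DIR/config.sh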
In [6]:
# Ensure you have permission to execute these scripts
! cd $RIVA_DIR && chmod +x ./riva_init.sh && chmod +x ./riva_start.sh
In [7]:
# Run Riva Init. This will fetch the containers/models
# YOU CAN SKIP THIS STEP IF YOU DID RIVA DEPLOY
! cd $RIVA_DIR && ./riva_init.sh config.sh
In [8]:
# Run Riva Start. This will deploy your model(s).
! cd $RIVA_DIR && ./riva_start.sh config.sh
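
riva_start.sh waits for the server to become ready; if it reports a failure, the server logs are the first place to look. The container name below assumes the QuickStart default (riva-speech); adjust it if you changed it in config.sh.

# Assumes the default QuickStart container name
! docker logs riva-speech 2>&1 | tail -n 20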

Run Inference

Once the Riva server is up and running with your models, you can send inference requests querying the server.

To send gRPC requests, you can install the Riva Python API bindings for the client. These are available as a pip .whl file with the QuickStart.

In [9]:
# Install client API bindings
! cd $RIVA_DIR && pip install <add .whl file>

Connect to Riva server and run inference

Now let's actually query the Riva server. The following cell queries the Riva server (using gRPC) and prints the result.

In [10]:
import grpc
import wave

import riva_api.audio_pb2 as ra
import riva_api.riva_asr_pb2 as rasr
import riva_api.riva_asr_pb2_grpc as rasr_srv

audio_file = "<add path to .wav file>"
server = "localhost:50051"

# Read the sample rate from the WAV header and load the raw audio bytes
wf = wave.open(audio_file, 'rb')
with open(audio_file, 'rb') as fh:
    data = fh.read()

# Open a gRPC channel to the Riva server and create an ASR client stub
channel = grpc.insecure_channel(server)
client = rasr_srv.RivaSpeechRecognitionStub(channel)

# Configure offline recognition for single-channel LINEAR_PCM audio
config = rasr.RecognitionConfig(
    encoding=ra.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=wf.getframerate(),
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=False,
    audio_channel_count=1
)

# Send the request and print the full response
request = rasr.RecognizeRequest(config=config, audio=data)
response = client.Recognize(request)
print(response)
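
The full response includes metadata for every result and alternative. The short sketch below prints only the top transcript from each result (assuming at least one result is returned):

# Print just the best transcript for each recognized segment
for result in response.results:
    if result.alternatives:
        print(result.alternatives[0].transcript)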

You can stop all Docker containers before shutting down the Jupyter kernel. Caution: the following command will stop all running containers.

In [11]:
! docker stop $(docker ps -a -q)