NGC | Catalog

Riva Skills Embedded Quick Start


Description

Scripts and utilities for getting started with Riva Speech Skills on Embedded platforms

Publisher

NVIDIA

Use Case

Other

Framework

Other

Latest Version

2.1.0

Modified

May 2, 2022

Compressed Size

52.74 KB

Quick Start Guide for Embedded Platforms

Riva Speech Skills supports two architectures, Linux x86_64 and Linux ARM64. These are referred to as data center (x86_64) and embedded (ARM64). These instructions are applicable to embedded users.

Prerequisites

Before using Riva skills, ensure you meet the following prerequisites:

  1. You have access and are logged into NVIDIA NGC. For step-by-step instructions, refer to the NGC Getting Started Guide.

  2. You have access to an NVIDIA Jetson AGX Xavier or an NVIDIA Jetson Xavier NX. For more information, refer to the Support Matrix.

  3. You have installed NVIDIA JetPack version 4.6.1 on Jetson Xavier. For more information, refer to the Support Matrix.

  4. You have ~7 GB free disk space on Jetson as required by the default containers and models. If you are deploying your custom Riva model intermediate representation (RMIR) models, the additional disk space required is ~5 GB plus the size of custom RMIR models.

  5. You have enabled the appropriate power mode on the Jetson platform. These modes activate all CPU cores and clock the CPU/GPU at maximum frequency to achieve the best performance.

    sudo nvpmodel -m 0 (Jetson Xavier AGX, mode MAXN)
    sudo nvpmodel -m 2 (Jetson Xavier NX, mode MODE_15W_6CORE)
    

Getting Started with Riva for Embedded Platforms

  1. Download the Riva Quick Start scripts. You can either use the command-line interface or you can download the scripts directly from your browser. Click the Download drop-down button in the upper right corner and select:

    • CLI - the download command is copied. Ensure you have the NGC CLI tool installed. Once installed, open the command prompt and paste the copied command to start your download.

    • Browser (Direct Download) - the download begins in a location of your choosing.

  2. Initialize and start Riva. The initialization step downloads and prepares Docker images and models. The start script launches the server.

    Note: This process can take up to an hour on an average internet connection. On embedded platforms, pre-optimized models for the GPU on the NVIDIA Jetson are downloaded.

    Optional: Modify the config.sh file within the quickstart directory with your preferred configuration. Options include which models to retrieve from NGC, where to store them, which GPU to use if more than one is installed on your system (refer to Local (Docker) for more details), and locations of SSL/TLS certificate and key files if using a secure connection.

    cd riva_quickstart_arm64_v2.1.0
    

    To use a USB device for audio input/output, connect it to the Jetson platform before starting the container so that it is automatically mounted into the container.

    Initialize and start Riva:

    bash riva_init.sh
    bash riva_start.sh
    
  3. From inside the server container, try the different services using the provided Jupyter notebooks.

    jupyter notebook --ip=0.0.0.0 --allow-root --notebook-dir=/work/notebooks
    

    To run the Jupyter notebooks, connect a browser window to the correct port (8888 by default) of the external IP address of the embedded platform.

  4. Shut down the server when finished. After you've completed these steps and experimented with inferencing, run the riva_stop.sh script to stop the server.
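The optional config.sh edits mentioned in step 2 typically look like the excerpt below. The variable names follow the 2.1.0 Quick Start scripts but may differ between releases, so treat this as an illustrative sketch and check it against the config.sh in your downloaded quickstart directory.

```shell
# Illustrative excerpt of config.sh (verify names against your release).
# Enable or disable individual services before running riva_init.sh:
service_enabled_asr=true
service_enabled_nlp=true
service_enabled_tts=true

# Host location where downloaded models are stored:
riva_model_loc="riva-model-repo"

# SSL/TLS certificate and key; leave empty for a plaintext connection:
ssl_server_cert=""
ssl_server_key=""
```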

For further details on how to customize a local deployment, refer to the Local (Docker) section.
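Before running the ASR and TTS clients in the sections below, it can help to confirm that the server is actually accepting connections. A minimal sketch using bash's built-in /dev/tcp redirection (the port number 50051, Riva's default gRPC port, is an assumption to check against your config.sh):

```shell
# Poll until a TCP port accepts connections, using bash's /dev/tcp support.
# Usage: wait_for_port HOST PORT [RETRIES]  (one-second interval between tries)
wait_for_port() {
  local host=$1 port=$2 retries=${3:-30} i
  for ((i = 0; i < retries; i++)); do
    # The subshell opens and immediately closes the probe connection.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

For example, `wait_for_port localhost 50051 && echo "Riva is up"` after running riva_start.sh.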

Transcribe Audio Files with Riva

For Automatic Speech Recognition (ASR), run the following commands from inside the Riva server container to perform streaming and offline transcription of audio files. If you are using SSL/TLS, be sure to include the --ssl_server_cert /ssl/server.crt option.

  1. For offline recognition, run:

    riva_asr_client --audio_file=/work/wav/en-US_sample.wav
    
  2. For streaming recognition, run:

    riva_streaming_asr_client --audio_file=/work/wav/en-US_sample.wav
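To transcribe several files at once, the offline client above can be wrapped in a small loop. A sketch, assuming riva_asr_client is on the PATH (as it is inside the server container); the function name is illustrative:

```shell
# Run the offline ASR client over every .wav file in a directory.
transcribe_all() {
  local dir=$1
  local f
  for f in "$dir"/*.wav; do
    [ -e "$f" ] || continue          # no matches: the glob stays literal
    echo "== $f =="
    riva_asr_client --audio_file="$f"
  done
}
```

For example, `transcribe_all /work/wav` transcribes the bundled samples one after another.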
    

Synthesize Speech with Riva

From within the Riva server container, run the following command to synthesize an audio file.

riva_tts_client --voice_name=English-US-Female-1 \
                --text="Hello, this is a speech synthesizer." \
                --audio_file=/work/wav/output.wav

The audio files are stored in the /work/wav directory.

The streaming API can be tested by adding the command-line option --online=true. However, the two modes produce identical output with this command-line client, since it saves the entire audio to a .wav file in either case.

Riva Collections

The Riva Collection contains the Riva Speech Server and Riva Speech Client containers, the Riva Quick Start scripts resource, and the Riva Speech Skills Helm chart.

Suggested Reading

For the latest product documentation, supported hardware and software, and release notes, refer to the Riva User's Guide.

License

By downloading and using Riva software, you accept the terms and conditions of this license.