NGC Catalog
RIVA Parakeet-CTC-XXL-1.1B ASR English - ASR set 8.1

Description
English (en-US) Parakeet-CTC-XXL-1.1B ASR model trained on ASR set 8.1
Publisher
NVIDIA
Latest Version
trainable_v8.1
Modified
December 20, 2024
Size
3.71 GB

Speech Recognition: Parakeet

Description

Parakeet-CTC-XXL-1.1B (around 1.1 billion parameters) [1] is trained on ASRSet, which contains over 150,000 hours of English (en-US) speech. The model transcribes speech as lower-case English text with spaces and apostrophes. This model is ready for commercial use.

License/Terms of Use

NVIDIA AI Foundation Models Community License Agreement

References

[1] Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition
[2] Fast-Conformer-CTC Model
[3] Conformer: Convolution-augmented Transformer for Speech Recognition

Model Architecture

Architecture Type: Parakeet-CTC (also known as FastConformer-CTC) [1], [2], an optimized variant of the Conformer model [3] that uses 8x depthwise-separable convolutional downsampling and is trained with CTC loss
Network Architecture: Parakeet-CTC-XXL-1.1B
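The practical effect of the 8x downsampling mentioned above is a reduced encoder output frame rate. As a rough sketch (assuming the typical 10 ms feature hop used by Conformer-style front ends; the exact count also depends on the convolution padding inside the stack):

```python
import math

def output_frames(num_feature_frames: int, downsampling: int = 8) -> int:
    """Approximate number of encoder output frames after 8x downsampling."""
    return math.ceil(num_feature_frames / downsampling)

# 10 s of audio at a 10 ms feature hop -> 1000 feature frames
print(output_frames(1000))  # 125 output frames, roughly 80 ms each
```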

Input

Input Type(s): Audio
Input Format(s): wav
Other Properties Related to Input: Maximum audio length depends on available GPU memory; no pre-processing needed; mono channel required
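Since mono input is required, stereo recordings need to be downmixed before being sent to the model. A minimal sketch using only the Python standard library, assuming 16-bit PCM WAV input (the downmix method and helper name are illustrative, not part of Riva):

```python
import struct
import wave

def to_mono(in_path: str, out_path: str) -> None:
    """Average the channels of a 16-bit PCM WAV file down to mono."""
    with wave.open(in_path, "rb") as src:
        assert src.getsampwidth() == 2, "expects 16-bit PCM"
        n_ch = src.getnchannels()
        rate = src.getframerate()
        frames = src.readframes(src.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    # Average each group of n_ch interleaved samples into one mono sample.
    mono = [sum(samples[i:i + n_ch]) // n_ch
            for i in range(0, len(samples), n_ch)]
    with wave.open(out_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(rate)
        dst.writeframes(struct.pack("<%dh" % len(mono), *mono))
```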

Output

Output Type(s): Text String in English
Output Parameters: 1-Dimension
Other Properties Related to Output: No Maximum Character Length, Does not handle special characters
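The output character set described above (lower-case letters, spaces, and apostrophes) can be expressed as a simple pattern, which is handy for validating transcripts downstream (the helper name is illustrative):

```python
import re

# The model emits only lower-case English letters, spaces, and apostrophes.
OUTPUT_ALPHABET = re.compile(r"^[a-z' ]*$")

def is_valid_transcript(text: str) -> bool:
    """Check that a string uses only the model's output alphabet."""
    return bool(OUTPUT_ALPHABET.match(text))

print(is_valid_transcript("it's a test"))   # True
print(is_valid_transcript("Hello, world"))  # False: upper case and comma
```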

How to Use this Model

The Riva Quick Start Guide is recommended as the starting point for trying out Riva models. For more information on using this model with Riva Speech Services, see the Riva User Guide.

Suggested Reading

Refer to the Riva documentation for more information.

Software Integration

Runtime Engine(s):

  • Riva 2.15.1 or higher

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Hopper
  • NVIDIA Jetson
  • NVIDIA Turing
  • NVIDIA Volta

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

Parakeet-CTC-XXL-1.1b_spe1024_en-US_8.1

Training & Evaluation

Training Dataset

Data Collection Method by dataset

  • Human

Labeling Method by dataset

  • Human; synthetic labels generated by OpenAI's open-source Whisper-v3 model

Properties (Quantity, Dataset Descriptions, Sensor(s)):

Over 150,000 hours of English (en-US) speech, drawn from a dynamic blend of public datasets and internal proprietary and customer datasets, with transcripts normalized to lower-cased, unpunctuated, spoken-form text.
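The lower-casing and punctuation-stripping part of that normalization can be sketched in a few lines (an illustrative helper, not NVIDIA's actual pipeline; spoken-form expansion, e.g. "101" to "one hundred one", is a separate step not shown here):

```python
import re

def normalize_transcript(text: str) -> str:
    """Lower-case text and strip punctuation, keeping apostrophes."""
    text = text.lower()
    # Replace anything outside letters, digits, apostrophes, and spaces.
    text = re.sub(r"[^a-z0-9' ]", " ", text)
    # Collapse runs of whitespace introduced by the substitution.
    return re.sub(r"\s+", " ", text).strip()

print(normalize_transcript("Hello, World! It's 9 AM."))
# hello world it's 9 am
```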

Evaluation Dataset

Data Collection Method by dataset

  • Human

Labeling Method by dataset

  • Human

Properties (Quantity, Dataset Descriptions, Sensor(s)):

A dynamic blend of public datasets and internal proprietary and customer datasets, with transcripts normalized to lower-cased, unpunctuated, spoken-form text.

Inference

Engine: Triton
Test Hardware:

  • NVIDIA A10
  • NVIDIA A100
  • NVIDIA A30
  • NVIDIA H100
  • NVIDIA Jetson Orin
  • NVIDIA L4
  • NVIDIA L40
  • NVIDIA Turing T4
  • NVIDIA Volta V100

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards here. Please report security vulnerabilities or NVIDIA AI Concerns here.