RIVA Parakeet-CTC-XXL-1.1B ASR Multilingual with Universal Tokenizer (around 1.1B parameters) [1] is trained on ASR Set, which contains over 90,000 hours of speech. A universal tokenizer is trained to support all languages. The model transcribes 25 languages (English (en-US, en-GB), Spanish (es-US, es-ES), German (de-DE), French (fr-FR, fr-CA), Italian (it-IT), Arabic (ar-AR), Japanese (ja-JP), Korean (ko-KR), Portuguese (pt-BR, pt-PT), Russian (ru-RU), Hindi (hi-IN), Dutch (nl-NL), Danish (da-DK), Norwegian Nynorsk (nn-NO), Norwegian Bokmål (nb-NO), Czech (cs-CZ), Polish (pl-PL), Swedish (sv-SE), Thai (th-TH), Turkish (tr-TR), Hebrew (he-IL)), producing transcripts with upper-case and lower-case letters along with punctuation, spaces, and apostrophes.
This model is ready for commercial use.
License: NVIDIA AI Foundation Models Community License Agreement
[1] Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition
[2] Fast-Conformer-CTC Model
[3] Conformer: Convolution-augmented Transformer for Speech Recognition
Architecture Type: Parakeet-CTC (also known as FastConformer-CTC) [1], [2], an optimized version of the Conformer model [3] with 8x depthwise-separable convolutional downsampling and CTC loss
Network Architecture: Parakeet-CTC-XXL-1.1B
Input Type(s): Audio
Input Format(s): WAV
Input Parameters: One-Dimensional (1D)
Other Properties Related to Input: Maximum audio length in seconds is limited by available GPU memory; no pre-processing is needed; mono-channel audio is required (see the conversion sketch below)
Output Type(s): Text
Output Format: String
Output Parameters: One-Dimensional (1D)
Other Properties Related to Output: No maximum character length; special characters are not handled
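Because the model expects single-channel audio, multi-channel recordings need to be downmixed before transcription. The following is a minimal sketch of that step, assuming the third-party soundfile package and hypothetical file names; it is not part of the Riva pipeline itself.

```python
# Minimal preprocessing sketch (assumption: the "soundfile" package is installed;
# file names are placeholders). Downmixes a multi-channel WAV file to the mono
# channel required by the model.
import soundfile as sf

def to_mono(in_path: str, out_path: str) -> None:
    """Read a WAV file and write a mono version by averaging its channels."""
    audio, sample_rate = sf.read(in_path)  # ndarray of shape (frames,) or (frames, channels)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)         # average channels down to a single mono track
    sf.write(out_path, audio, sample_rate)

to_mono("meeting_stereo.wav", "meeting_mono.wav")
```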
The Riva Quick Start Guide is recommended as the starting point for trying out Riva models. For more information on using this model with Riva Speech Services, see the Riva User Guide.
Refer to the Riva documentation for more information.
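As an illustration only, the sketch below shows how a mono WAV file might be transcribed offline through Riva Speech Services using the Riva Python client (the nvidia-riva-client package). The server address, language code, and file name are placeholder assumptions; consult the Riva Quick Start Guide and Riva User Guide for deployment-specific values and the authoritative client API.

```python
# Hedged sketch of offline transcription with the Riva Python client
# (nvidia-riva-client). Server URI, language code, and file name are placeholders.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")      # address of a running Riva server (assumption)
asr_service = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,  # PCM-encoded WAV input
    language_code="en-US",                          # any of the model's supported locales
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

audio_file = "meeting_mono.wav"                     # placeholder; mono WAV as required above
riva.client.add_audio_file_specs_to_config(config, audio_file)  # copy sample rate / channel count from the file header

with open(audio_file, "rb") as f:
    audio_bytes = f.read()

response = asr_service.offline_recognize(audio_bytes, config)
for result in response.results:
    print(result.alternatives[0].transcript)
```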
Runtime Engine(s):
Supported Hardware Microarchitecture Compatibility:
[Preferred/Supported] Operating System(s):
Parakeet-CTC-XXL-1.1b_universal_spe8.5k_1.0
Data Collection Method by dataset:
Labeling Method by dataset:
Properties:
This model is trained on over 90,000 hours of speech in 25 languages (English (US, GB), Spanish (US, ES), German, French, Italian, Arabic, Japanese, Korean, Portuguese (Brazil), Russian, Hindi, French (Canada), Dutch, Danish, Norwegian Nynorsk, Norwegian Bokmål, Czech, Polish, Swedish, Thai, Turkish, Portuguese (Portugal), Hebrew), comprising a dynamic blend of public, internal proprietary, and customer datasets, normalized to include upper-cased, lower-cased, punctuated, and spoken-form text.
Data Collection Method by dataset:
Labeling Method by dataset:
Properties:
A dynamic blend of public, internal proprietary, and customer datasets, normalized to include upper-cased, lower-cased, punctuated, and spoken-form text.
Engine: Triton
Test Hardware:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards here.
Please report security vulnerabilities or NVIDIA AI Concerns here.