Many AI applications share common needs: classification, object detection, language translation, text-to-speech, recommender engines, sentiment analysis, and more. When developing applications with these capabilities, it is much faster to start with a pre-trained model and then fine-tune it for a specific use case. The NGC catalog offers pre-trained models for a variety of common AI tasks that are optimized for NVIDIA Tensor Core GPUs and can be easily re-trained by updating just a few layers, saving valuable time.
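The "re-train by updating just a few layers" idea can be sketched without any particular framework or NGC model: below is a minimal, self-contained toy in plain Python in which a "pre-trained" first layer stays frozen while only a small logistic head is trained with gradient descent. Every name, weight, and dataset here is illustrative, not part of any NGC asset.

```python
import math
import random

random.seed(0)

# "Pre-trained" feature extractor: a fixed 2->4 projection whose weights
# stay frozen, standing in for the early layers of a pre-trained network.
FROZEN_W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]

def features(x):
    # Frozen layer: tanh of a fixed linear projection (never updated).
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in FROZEN_W]

# Tiny synthetic binary task: label is 1 when the coordinates sum to a positive value.
xs = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
data = [(x, 1.0 if x[0] + x[1] > 0 else 0.0) for x in xs]

# Trainable "head": a single logistic unit, the only part we update.
head_w = [0.0] * 4
head_b = 0.0

def predict(x):
    z = sum(w * f for w, f in zip(head_w, features(x))) + head_b
    return 1.0 / (1.0 + math.exp(-z))

def loss():
    # Mean binary cross-entropy over the dataset.
    eps = 1e-9
    return -sum(y * math.log(predict(x) + eps) +
                (1 - y) * math.log(1 - predict(x) + eps)
                for x, y in data) / len(data)

before = loss()
lr = 0.5
for _ in range(200):  # plain gradient descent on the head parameters only
    grad_w, grad_b = [0.0] * 4, 0.0
    for x, y in data:
        err = predict(x) - y  # dL/dz for the logistic loss
        f = features(x)
        for i in range(4):
            grad_w[i] += err * f[i] / len(data)
        grad_b += err / len(data)
    for i in range(4):
        head_w[i] -= lr * grad_w[i]
    head_b -= lr * grad_b
after = loss()
print(f"loss before fine-tuning: {before:.3f}, after: {after:.3f}")
```

Because the frozen layer is never touched, training only has to fit a handful of head parameters, which is why this workflow converges so much faster than training from scratch; with a real catalog model the same pattern applies at larger scale.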
STT En Conformer-CTC Large: Conformer-CTC-Large model for English Automatic Speech Recognition, trained on NeMo ASRSet
STT En Conformer-CTC Medium: Conformer-CTC-Medium model for English Automatic Speech Recognition, trained on NeMo ASRSet
STT En Conformer-CTC Small: Conformer-CTC-Small model for English Automatic Speech Recognition, trained on NeMo ASRSet
STT En Conformer-Transducer Large: Conformer-Transducer-Large model for English Automatic Speech Recognition, trained on NeMo ASRSet
STT En Conformer-Transducer Small: Conformer-Transducer-Small model for English Automatic Speech Recognition, trained on NeMo ASRSet
STT En Conformer-Transducer Medium: Conformer-Transducer-Medium model for English Automatic Speech Recognition, trained on NeMo ASRSet
STT En Conformer-CTC Large LibriSpeech: Conformer-CTC-Large model for English Automatic Speech Recognition, trained with NeMo on the LibriSpeech dataset
STT En Conformer-CTC Medium LibriSpeech: Conformer-CTC-Medium model for English Automatic Speech Recognition, trained with NeMo on the LibriSpeech dataset
STT En Conformer-CTC Small LibriSpeech: Conformer-CTC-Small model for English Automatic Speech Recognition, trained with NeMo on the LibriSpeech dataset
Riva ASR English LM: Base English n-gram language model trained on LibriSpeech, Switchboard, and Fisher
nnUNet PyTorch checkpoint 2D AMP: nnUNet 2D PyTorch checkpoint trained with AMP on fold 2