NGC Catalog

LLMs optimized for RTX PCs
Features

Description: A collection of TensorRT-LLM accelerated Windows RTX PC LLM models.
Curator: NVIDIA
Modified: March 14, 2025
Entity types: Containers, Helm Charts, Models, Resources

This is a collection of TensorRT-LLM-accelerated LLM models for Windows RTX PCs. These models can be deployed on NVIDIA RTX GPUs with TensorRT-LLM.

TensorRT-LLM provides developers with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
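As an illustration of that workflow, the sketch below uses the high-level LLM API from the TensorRT-LLM Python package to load a model and run inference. The model identifier, prompt, and sampling settings are placeholders, not part of this collection's documentation, and the exact API surface may vary between TensorRT-LLM releases.

# Minimal sketch of the TensorRT-LLM high-level Python API (details may vary by release).
from tensorrt_llm import LLM, SamplingParams

# Placeholder model identifier: point this at a Hugging Face model name or a local
# checkpoint/engine directory for one of the RTX-optimized models in this collection.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

# Illustrative sampling settings, not prescriptive values.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

prompts = ["What does TensorRT-LLM do on NVIDIA RTX GPUs?"]

# generate() builds or loads the TensorRT engine as needed and runs inference on the GPU.
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)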

For more information on TensorRT-LLM, see the TensorRT-LLM documentation. To get started with TensorRT-LLM on Windows, refer to the TensorRT-LLM getting-started resources for Windows.