This is a collection of LLMs accelerated with TensorRT-LLM for Windows RTX PCs. These models can be deployed on NVIDIA RTX GPUs with TensorRT-LLM.
TensorRT-LLM provides developers with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
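As an illustration of that workflow, here is a minimal sketch using the high-level `LLM` API from the `tensorrt_llm` Python package. The model name is a placeholder, and the exact API surface may differ between TensorRT-LLM releases, so treat this as a sketch rather than a drop-in script:

```python
# Minimal sketch of the high-level tensorrt_llm LLM API.
# The model name below is illustrative; substitute any supported checkpoint.
from tensorrt_llm import LLM, SamplingParams


def main():
    # Builds (or loads a cached) TensorRT engine for the model on first use.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # placeholder model

    prompts = ["Hello, my name is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Run inference on the built engine and print the generated text.
    for output in llm.generate(prompts, sampling_params):
        print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")


if __name__ == "__main__":
    main()
```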
For more information on TensorRT-LLM, see the NVIDIA/TensorRT-LLM GitHub repository: https://github.com/NVIDIA/TensorRT-LLM. The repository also covers how to get started with TensorRT-LLM on Windows.