NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed to accelerate deployment of generative AI across cloud, data center, and workstations.
The Stable Diffusion XL NIM is a container that lets you run Stability AI's Stable Diffusion XL model, one of the most popular visual generative AI models in the world, with optimized performance.
Note that the container does not include the model itself. You must obtain a license for the model directly from Stability AI's webpage and download the TensorRT (TRT)-optimized model from Hugging Face.
The NIM includes all the instructions and tools needed to bring in the model and generate the TensorRT engines required to run it optimally on your target NVIDIA GPUs.
You can try the Stable Diffusion XL NIM on build.nvidia.com, either on the website or through the demo API.
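As a rough illustration, a demo API call can be sketched in Python as below. The endpoint URL, payload field names, and response shape are assumptions based on the hosted demo, not authoritative documentation; check build.nvidia.com for the exact request format.

```python
"""Minimal sketch of invoking the Stable Diffusion XL demo API.

Assumptions (not from official docs): the endpoint URL, the payload
fields (text_prompts, seed, steps), and a JSON response body.
"""
import json
import os
import urllib.request

# Assumed endpoint for the hosted demo API.
INVOKE_URL = "https://ai.api.nvidia.com/v1/genai/stabilityai/stable-diffusion-xl"


def build_request(prompt: str, steps: int = 25, seed: int = 0) -> dict:
    """Assemble the JSON payload; field names are assumptions."""
    return {
        "text_prompts": [{"text": prompt, "weight": 1.0}],
        "seed": seed,
        "steps": steps,
    }


def invoke(prompt: str, api_key: str) -> dict:
    """POST the payload with a bearer token and return the parsed JSON reply."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        INVOKE_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    key = os.environ.get("NVIDIA_API_KEY")
    if key:
        print(invoke("A photo of a red fox in the snow", key))
    else:
        # No key set: just show the payload that would be sent.
        print(build_request("A photo of a red fox in the snow"))
```

An API key from build.nvidia.com would be supplied via the `NVIDIA_API_KEY` environment variable in this sketch; without it, the script only prints the payload it would send.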
Access all software artifacts for a self-hosted instance of the Stable Diffusion XL 1.0 NIM in the Entities tab of this collection.
For optimal performance, deploy the supported NVIDIA AI Enterprise Infrastructure software with this NIM.
Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA AI Enterprise Documentation.
For a comprehensive collection of resources on Stable Diffusion XL NIM, including tutorials, documentation, and examples, visit the following links:
Get access to knowledge base articles and support cases or submit a ticket.
Visit the NVIDIA AI Enterprise Documentation Hub for release documentation, deployment guides, and more.
Go to the NVIDIA Licensing Portal to manage your software licenses.
This NIM is licensed under the NVIDIA AI Product Agreement. By downloading and using the artifacts in this collection, you accept the terms and conditions of this license.