Linux / amd64
This container provides the training functionality for the NeMo Customizer ecosystem. It runs customization jobs that fine-tune large language models (LLMs) for your specific use cases.
You can balance compute requirements against performance by selecting among a range of models, model sizes, and fine-tuning techniques. The NeMo Customizer training image executes the model training tasks and produces models that integrate directly with NVIDIA NIM for LLMs. It scales with the number of nodes you have available, supporting both single-node and multi-node, multi-GPU training workloads. A rough illustration of how such a job might be submitted is sketched below.
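As a rough illustration only: the snippet below sketches how a fine-tuning job could be submitted to a NeMo Customizer API endpoint from Python. The service URL, config name, dataset reference, and hyperparameter fields shown here are assumptions for the example, not the authoritative schema; consult your deployment's API reference for the exact request format.

```python
# Hedged sketch: submit a customization (fine-tuning) job to an assumed
# NeMo Customizer endpoint. All names below are illustrative assumptions.
import requests

CUSTOMIZER_URL = "http://nemo-customizer:8000"  # assumed service address

job_spec = {
    "config": "meta/llama-3.1-8b-instruct",      # assumed base-model config name
    "dataset": {"name": "my-training-dataset"},  # hypothetical dataset reference
    "hyperparameters": {
        "training_type": "sft",      # supervised fine-tuning
        "finetuning_type": "lora",   # parameter-efficient technique to reduce compute
        "epochs": 3,
        "batch_size": 8,
        "learning_rate": 1e-4,
    },
}

response = requests.post(
    f"{CUSTOMIZER_URL}/v1/customization/jobs",  # assumed jobs endpoint
    json=job_spec,
    timeout=30,
)
response.raise_for_status()
print("Created job:", response.json().get("id"))
```

Choosing a smaller base model or a parameter-efficient technique such as LoRA lowers the compute needed per job, while full fine-tuning of larger models typically calls for the multi-node, multi-GPU configurations described above.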
Note: Use, distribution, or deployment of this microservice in production requires an NVIDIA AI Enterprise License.
The software and materials are governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products.