NGC Catalog
DeepSeek-Coder-V2-Lite-Instruct

Description
This container houses DeepSeek-Coder-V2-Lite-Instruct, an open-source Mixture-of-Experts (MoE) code model for generating, completing, and fixing code in many programming languages from natural-language prompts, with strong mathematical reasoning.
Publisher: NVIDIA
Latest Tag: 1
Modified: July 23, 2025
Compressed Size: 10.13 GB
Multinode Support: No
Multi-Arch Support: Yes
Security Scan Results (Tag 1, Latest): Linux / amd64, Linux / arm64

DeepSeek-Coder-V2-Lite-Instruct Overview

Description:

This container houses DeepSeek-Coder-V2-Lite-Instruct, a powerful and efficient open-source Mixture-of-Experts (MoE) code language model that generates and understands code across a large number of programming languages. It is designed to handle a wide range of coding tasks, including code completion, bug fixing, and generating complex code snippets from natural-language prompts, and it also has strong mathematical reasoning capabilities.

The container components are ready for commercial and non-commercial use.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the non-NVIDIA DeepSeek-Coder-V2-Lite-Instruct Model Card (deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct).

License/Terms of Use:

GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products, and the use of this model is governed by the NVIDIA Community Model License Agreement.

ADDITIONAL INFORMATION: DeepSeek-Coder-V2 LICENSE.

You are responsible for ensuring that your use of the NVIDIA community models complies with all applicable laws.

Deployment Geography:

Global

Release Date:

GitHub 06/17/2024 via
https://github.com/deepseek-ai/DeepSeek-Coder-V2

Hugging Face 07/18/2024 via
https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct

DeepSeek-Coder-V2-Lite-Instruct

The DeepSeek-Coder-V2-Lite-Instruct container includes the following model:

Model Name & Link: DeepSeek-Coder-V2-Lite-Instruct
Use Case: Code generation, completion, and instruction following across 338 programming languages; a powerful tool for software developers to accelerate their workflow, debug code, and solve complex algorithmic problems.
How to Pull the Model: Manual
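
The snippet below is a minimal sketch of exercising the code-generation use case once the container is running. It assumes an OpenAI-compatible NIM deployment reachable at http://localhost:8000 and a served model id of deepseek-ai/deepseek-coder-v2-lite-instruct; both are assumptions rather than values taken from this page, so confirm the actual endpoint and served model id against your own deployment before using it.

    # Hypothetical example: the endpoint, port, and model id are assumptions for a
    # typical OpenAI-compatible NIM deployment; they are not taken from this page.
    import requests

    NIM_URL = "http://localhost:8000/v1/chat/completions"    # assumed local port mapping
    MODEL_ID = "deepseek-ai/deepseek-coder-v2-lite-instruct"  # assumed served model id

    payload = {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": "Write a Python function that returns the nth Fibonacci number.",
            }
        ],
        "max_tokens": 256,
        "temperature": 0.2,
    }

    # Send the request and print the generated code from the first choice.
    response = requests.post(NIM_URL, json=payload, timeout=120)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])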

Deployment Details:

Visit the NIM Container LLM page for release documentation, deployment guides, and more.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times than CPU-only solutions.

Container Version(s):

nvcr.io/nvstaging/nim/deepseek-coder-v2-lite-instruct:1.10.1-31076547
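
As a quick sanity check after starting a container from the tag above, the following sketch lists the model id(s) the running service reports. It assumes the container's HTTP port is published locally as 8000 and that the OpenAI-compatible /v1/models route is exposed; adjust the base URL to match your deployment.

    # Hypothetical sanity check: assumes the container's HTTP port is published
    # locally as 8000 and that the OpenAI-compatible /v1/models route is exposed.
    import requests

    BASE_URL = "http://localhost:8000"  # adjust to match your deployment

    # List the model id(s) the running service reports; use the returned id
    # as the "model" field in inference requests.
    models = requests.get(f"{BASE_URL}/v1/models", timeout=30).json()
    for entry in models.get("data", []):
        print(entry["id"])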

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI concerns here.