Supported Architectures: Linux / amd64, Linux / arm64
This container houses DeepSeek-Coder-V2-Lite-Instruct, a powerful and efficient open-source Mixture-of-Experts (MoE) code language model that generates and understands code across 338 programming languages. It is designed to handle a wide range of coding tasks, including code completion, bug fixing, and generating complex code snippets from natural-language prompts, and it also has strong mathematical reasoning capabilities.
The container components are ready for commercial/non-commercial use.
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party's requirements for this application and use case; see the Non-NVIDIA DeepSeek-Coder-V2-Lite-Instruct Model Card for more information.
GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products; the use of this model is governed by the NVIDIA Community Model License Agreement.
ADDITIONAL INFORMATION: DeepSeek-Coder-V2 LICENSE.
You are responsible for ensuring that your use of the NVIDIA community models complies with all applicable laws.
Deployment Geography: Global
Release Date(s):
Github: 06/17/2024 via https://github.com/deepseek-ai/DeepSeek-Coder-V2
Huggingface: 07/18/2024 via https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
The DeepSeek-Coder-V2-Lite-Instruct container includes the following model:
| Model Name & Link | Use Case | How to Pull the Model |
|---|---|---|
| DeepSeek-Coder-V2-Lite-Instruct | Code generation, completion, and instruction-following across 338 programming languages; a tool for software developers to accelerate their workflow, debug code, and solve complex algorithmic problems. | Manual |
Deployment Details:
Visit the NIM Container LLM page for release documentation, deployment guides, and more.
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times than CPU-only solutions.
Container image: nvcr.io/nvstaging/nim/deepseek-coder-v2-lite-instruct:1.10.1-31076547
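As a minimal deployment sketch (not the authoritative procedure; see the NIM Container LLM documentation linked above), the container can be pulled and started roughly as follows. This assumes an NGC API key in `NGC_API_KEY` with access to the registry path shown, a GPU host with the NVIDIA Container Toolkit installed, and illustrative values for the cache directory, shared-memory size, and port:

```
# Authenticate to nvcr.io with an NGC API key (assumed to be exported as NGC_API_KEY).
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

# Pull the container image listed above.
docker pull nvcr.io/nvstaging/nim/deepseek-coder-v2-lite-instruct:1.10.1-31076547

# Start the NIM microservice; it serves an OpenAI-compatible API on the mapped port.
# Cache path, shared-memory size, and port mapping are illustrative, not required values.
docker run -it --rm --gpus all \
  --shm-size=16GB \
  -e NGC_API_KEY="$NGC_API_KEY" \
  -v "$HOME/.cache/nim:/opt/nim/.cache" \
  -p 8000:8000 \
  nvcr.io/nvstaging/nim/deepseek-coder-v2-lite-instruct:1.10.1-31076547
```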
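Once the service is up, a quick check of the code-generation use case listed in the table above might look like the request below. The served model identifier is an assumption here; query the `/v1/models` endpoint first and use the id it returns:

```
# List the model identifier(s) served by the container.
curl -s http://localhost:8000/v1/models

# Request a code completion via the OpenAI-compatible chat endpoint.
# The "model" value below is an assumption; substitute the id returned above.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-ai/deepseek-coder-v2-lite-instruct",
        "messages": [
          {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
        ],
        "max_tokens": 256
      }'
```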
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.