Supported platforms: Linux / amd64, Linux / arm64
This is a collection of containers for running CUDA workloads on GPUs. The collection includes containerized CUDA samples, for example vectorAdd (which demonstrates vector addition) and nbody (a gravitational n-body simulation), among others. These containers can be used to validate the software configuration of the GPUs in the system or simply to run some example workloads.
The vectorAdd sample is used by the NVIDIA GPU Operator as part of its self-validation.
The containers can be run from the Docker command line or in Kubernetes pod specs. For example, in Kubernetes, use the following pod spec:
cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: vectoradd
spec:
  restartPolicy: OnFailure
  containers:
  - name: vectoradd
    image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
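Once the pod has been created, you can confirm the sample ran successfully by inspecting its logs. The following is a sketch, assuming the pod name `vectoradd` from the spec above and that the sample prints `Test PASSED` on success:

```shell
# Wait for the pod to reach the Succeeded phase (requires kubectl >= 1.23
# for the --for=jsonpath form), then inspect its output.
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/vectoradd --timeout=120s
kubectl logs vectoradd
# The log output should include a line such as "Test PASSED".

# Clean up the completed pod.
kubectl delete pod vectoradd
```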
or run it from the Docker CLI:
docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
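When the container runs successfully, the sample validates the device-side computation against a host-side reference. A minimal sketch of scripting that check, assuming the sample prints `Test PASSED` on success (the surrounding log lines may vary by sample version):

```shell
# Run the sample and fail loudly if the success marker is missing.
if docker run --rm --gpus all \
    nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2 | grep -q "Test PASSED"; then
  echo "GPU validation succeeded"
else
  echo "GPU validation failed" >&2
  exit 1
fi
```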
The containers are licensed under Apache 2.0.
This product is supported when deployed by the NVIDIA GPU Operator.