Linux / arm64
CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
Currently, only a CUDA runtime container is provided. The CUDA runtime container image is intended to be used as a base image to containerize and deploy CUDA applications on Jetson. It includes the CUDA runtime and the CUDA math libraries; these components are not mounted from the host by the NVIDIA Container Runtime. The NVIDIA Container Runtime still mounts platform-specific libraries and select device nodes into the container.
The image is tagged with the version corresponding to the CUDA release version. Accordingly, the l4t-cuda:r10.2.460-runtime container is intended to be run on devices running JetPack 4.6, which supports CUDA version 10.2.460.
Ensure that the NVIDIA Container Runtime is installed and working on your Jetson device.
Note that the NVIDIA Container Runtime is available for install as part of NVIDIA JetPack.
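One quick way to confirm that the nvidia runtime is registered with Docker is to inspect the runtime list reported by the Docker daemon (the exact output format may vary across Docker versions):
sudo docker info | grep -i runtime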
Before running the l4t-cuda runtime container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
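For example:
sudo docker pull nvcr.io/nvidia/l4t-cuda:r10.2.460-runtime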
Procedure
To run the container:
xhost +
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-cuda:r10.2.460-runtime
Options explained:
By default, a limited set of device nodes and associated functionality is exposed within the cuda-runtime containers using the mount plugin capability. This list is documented here.
Users can expose additional devices using the --device option provided by Docker.
Directories and files can be bind-mounted using the -v option; see the example after this list.
Note that using some devices might require the associated libraries to be available inside the container.
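As a sketch, the following run command exposes an additional device node and bind-mounts a host directory; /dev/video0 and /data are placeholders for whatever device and path your application actually needs:
sudo docker run -it --rm --runtime nvidia --device /dev/video0 -v /data:/data nvcr.io/nvidia/l4t-cuda:r10.2.460-runtime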
Once you have successfully launched the l4t-cuda container, you can run CUDA applications inside it. For example, to run the CUDA samples inside the l4t-cuda runtime container, mount the CUDA samples into the container using the -v option during docker run, then run them from within the container.
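As a minimal sketch, assuming JetPack installed the CUDA samples at /usr/local/cuda-10.2/samples on the host (the default location), you could build the deviceQuery sample on the host first (the runtime image does not include the nvcc compiler), then mount the samples directory and run the prebuilt binary inside the container:
cd /usr/local/cuda-10.2/samples/1_Utilities/deviceQuery
sudo make
sudo docker run -it --rm --runtime nvidia -v /usr/local/cuda-10.2/samples:/samples nvcr.io/nvidia/l4t-cuda:r10.2.460-runtime /samples/1_Utilities/deviceQuery/deviceQuery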
The images are governed by the following NVIDIA End User License Agreements. By pulling and using the CUDA images, you accept the terms and conditions of these licenses. Since the images may include components licensed under open-source licenses such as GPL, the sources for these components are archived here.
To view the NVIDIA Deep Learning Container license, click here
For more information on CUDA, including the release notes, programming model, APIs and developer tools, visit the CUDA documentation site.