Linux / arm64
NVIDIA JetPack SDK is the most comprehensive solution for building end-to-end accelerated AI applications. JetPack SDK provides a full development environment for hardware-accelerated AI-at-the-edge development. It includes a complete set of libraries for acceleration of GPU computing, multimedia, graphics, and computer vision.
The NVIDIA L4T JetPack container packages all accelerated libraries included in JetPack SDK, such as CUDA, cuDNN, TensorRT, VPI, and Jetson Multimedia. Because it includes all JetPack SDK components, it can be used as a development container for containerized development. The Dockerfile for this container can be found at this link. You can use that Dockerfile as a reference recipe to create your own development container (with both dev and runtime components) or deployment container (with only runtime components).
Ensure that the NVIDIA Container Runtime is installed on your Jetson device.
You can run this container on top of a JetPack SDK installation; note that the NVIDIA Container Runtime is available for install as part of NVIDIA JetPack.
You can also run this container on top of the Jetson Linux BSP after installing the NVIDIA Container Runtime with:
sudo apt install nvidia-container
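To confirm the runtime is available after installation, you can check Docker's registered runtimes (the exact output format varies by Docker version):

```shell
# Verify that the "nvidia" runtime is registered with Docker.
# The Runtimes line should include "nvidia" alongside "runc".
sudo docker info | grep -i runtimes
```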
Before running the l4t-jetpack container, use docker pull to ensure you have an up-to-date image. Once the pull is complete, you can run the container image.
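For example, to pull the release tag used in this guide:

```shell
# Pull the l4t-jetpack image for the r35.3.1 release before running it.
sudo docker pull nvcr.io/nvidia/l4t-jetpack:r35.3.1
```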
Procedure:
To run the container:
xhost +
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-jetpack:r35.3.1
Options explained:
By default, a limited set of device nodes and associated functionality is exposed within the container using the mount plugin capability. This list is documented here.
You can expose additional devices using the --device option of docker run.
Directories and files can be bind mounted using the -v option.
Note that usage of some devices might need associated libraries to be available inside the container.
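As an illustration of the --device and -v options above (the device node and host directory here are placeholders, not part of the original instructions):

```shell
# Hypothetical example: expose an extra device node (e.g. a camera at
# /dev/video0) and bind mount a host directory into the container at /data.
sudo docker run -it --rm --net=host --runtime nvidia \
    --device /dev/video0 \
    -v /home/user/data:/data \
    nvcr.io/nvidia/l4t-jetpack:r35.3.1
```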
Once you have successfully launched the l4t-jetpack container, you can run some tests inside it.
To run the CUDA sample test, run the following commands within the container:
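The exact commands are not reproduced here; a typical sequence looks like the following, assuming the CUDA samples ship under /usr/local/cuda/samples in this image (the location may differ between JetPack releases):

```shell
# Build and run the deviceQuery CUDA sample (path assumed; adjust for your image).
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
make
./deviceQuery
```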
Output should indicate that the sample passed.
To run the cuDNN sample test, run the following commands within the container:
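A typical sequence looks like the following, assuming the cuDNN samples are installed under /usr/src/cudnn_samples_v8 (the samples directory and version suffix may differ in your image):

```shell
# Copy the cuDNN samples to a writable location, then build and run mnistCUDNN
# (paths assumed; adjust to match the cudnn_samples directory in your image).
cp -r /usr/src/cudnn_samples_v8 /tmp
cd /tmp/cudnn_samples_v8/mnistCUDNN
make
./mnistCUDNN
```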
Output should indicate that the sample passed.
To run the TensorRT sample test, run the following commands within the container:
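One quick check, assuming TensorRT is installed under /usr/src/tensorrt in this image (the ONNX model path is an assumption and may differ by TensorRT version), is to run the bundled trtexec tool against a sample model:

```shell
# Exercise TensorRT with the bundled trtexec binary (paths assumed).
/usr/src/tensorrt/bin/trtexec --onnx=/usr/src/tensorrt/data/mnist/mnist.onnx
```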
Outputs should indicate that the samples passed.
Note: DLA is not supported on Orin Nano.
To run the VPI sample test, run the following commands within the container:
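A typical sequence looks like the following, assuming the VPI samples are installed under /opt/nvidia/vpi2 (the vpi2 path, sample directory name, and asset image are assumptions and vary by JetPack/VPI release):

```shell
# Copy a VPI sample to a writable location, build it with CMake, and run it
# on the CUDA backend (all paths and the sample/asset names are assumptions).
cp -r /opt/nvidia/vpi2/samples /tmp/vpi_samples
cd /tmp/vpi_samples/01-convolve_2d
cmake . && make
./vpi_sample_01_convolve_2d cuda ../assets/kodim08.png
```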
Note: VPI currently does not support PVA backend within containers.
By pulling and using the container, you accept the terms and conditions of this End User License Agreement.
For more information on JetPack, including the release notes, programming model, APIs and developer tools, visit the JetPack documentation site.