Linux / arm64
The L4T Compute Assist container is part of NVIDIA Holoscan, the AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core microservices to run streaming, imaging, and other applications, from embedded to edge to cloud. It can be used to build streaming AI pipelines for a variety of domains, including Medical Devices, High Performance Computing at the Edge, Industrial Inspection and more.
This container allows running compute-only applications using CUDA and TensorRT on the integrated GPU (iGPU) of the Holoscan DevKits, while other applications run on the discrete GPU (dGPU), natively or in another container.
For a full list of Holoscan documentation, visit the Holoscan developer page.
Note: Make sure you have joined the Holoscan SDK Program and, if needed, the Rivermax SDK Program before using the NVIDIA SDK Manager.
Set up your developer kit in dGPU mode:
| Developer Kit | User Guide | HoloPack |
|---|---|---|
| NVIDIA IGX Orin | Coming Soon | 2.0 |
| NVIDIA IGX Orin [ES] | Guide | 1.2 |
| NVIDIA Clara AGX | Guide | 1.2 |
Load the nvgpu kernel driver for iGPU:
sudo insmod $(find /usr/lib/modules -name nvgpu.ko -type f,l | head -n1)
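The one-liner above can be unpacked into a small helper; this is a sketch where the search root is parameterized (on the devkit it defaults to `/usr/lib/modules`), and `\( -type f -o -type l \)` is the portable spelling of GNU find's `-type f,l`:

```shell
#!/bin/sh
# Sketch: locate the first nvgpu.ko (regular file or symlink) under a
# modules tree. The search root is parameterized so the lookup can be
# exercised off-device; on the devkit it is /usr/lib/modules.
find_nvgpu() {
    find "${1:-/usr/lib/modules}" -name nvgpu.ko \( -type f -o -type l \) 2>/dev/null | head -n1
}

# On the devkit, load the module (requires root):
#   sudo insmod "$(find_nvgpu)"
# Confirm it is loaded:
#   lsmod | grep nvgpu
```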
Set the container image to use as `NGC_CONTAINER_IMAGE_PATH` in your terminal: either copy the version you want from the Get Container drop-down at the top of this webpage, or choose among these tags:
- l4t_35.3-trt_8.5.2
- l4t_34.1.2-trt_8.4.0
# For example
export NGC_CONTAINER_IMAGE_PATH="nvcr.io/nvidia/clara-holoscan/l4t-compute-assist:l4t_35.3-trt_8.5.2"
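The tag after the colon encodes the L4T and TensorRT versions. A hypothetical helper (the name `image_tag` is illustrative, not part of the product) to pull the tag back out of the image path, e.g. for logging which versions you are running:

```shell
#!/bin/sh
# Sketch: extract the tag (version part) from an NGC image path by
# stripping everything up to and including the last colon.
image_tag() {
    echo "${1##*:}"
}

# image_tag "nvcr.io/nvidia/clara-holoscan/l4t-compute-assist:l4t_35.3-trt_8.5.2"
# prints "l4t_35.3-trt_8.5.2"
```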
Start the L4T Compute Assist container:
export HOLOSCAN_SDK_INSTALL_PATH="/opt/holoscan/nvidia" # choose other path if installed somewhere else on your host
export APP_PATH="/path/to/your_app"
export DATA_PATH="/path/to/your_data" # if needed by your_app
docker run -it --rm --net=host --privileged --runtime=runc \
-v ${HOLOSCAN_SDK_INSTALL_PATH}:${HOLOSCAN_SDK_INSTALL_PATH}:ro \
-e PYTHONPATH=${HOLOSCAN_SDK_INSTALL_PATH}/python/lib \
-v ${APP_PATH}:${APP_PATH}:ro \
-v ${DATA_PATH}:${DATA_PATH}:ro \
${NGC_CONTAINER_IMAGE_PATH}
- `--privileged` runs the container with privileged permissions to access the iGPU driver.
- `--runtime=runc` ensures you are not using the nvidia docker runtime, which would load dGPU drivers instead of iGPU drivers. If this does not work, remove the nvidia runtime from your defaults in /etc/docker/daemon.json (you'll need to add `--runtime=nvidia` when running your other containers for dGPU).
- `HOLOSCAN_SDK_INSTALL_PATH` is the path to where you have installed the Holoscan SDK on your host (for example `/opt/holoscan/nvidia`, or a `dist-packages/holoscan` directory if installed as a python wheel). Alternatively, add `--name holoscan_dgpu -v /opt/holoscan/nvidia` when running your dGPU container, and replace `-v ${HOLOSCAN_SDK_INSTALL_PATH} ...` with `--volumes-from holoscan_dgpu` when running the L4T Compute Assist container.
- `-e PYTHONPATH` is set for `python3` to find the Holoscan python module if it is not installed from a wheel.
- `APP_PATH` is the path of your built Holoscan application on your host.
- `DATA_PATH` is the path of any data on your host which you might need to run your application.

Once in the container, you can run `${APP_PATH}`, or any other commands, to leverage CUDA and/or TensorRT on the iGPU.
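Before launching, it can help to fail fast on a mistyped host path, since `docker run -v` silently creates an empty directory on the host when a bind-mount source is missing. A minimal sketch (the helper name is illustrative):

```shell
#!/bin/sh
# Sketch: verify that each host path to be bind-mounted actually exists,
# so a typo in HOLOSCAN_SDK_INSTALL_PATH / APP_PATH / DATA_PATH is caught
# before docker creates an empty directory in its place.
check_mount_sources() {
    rc=0
    for p in "$@"; do
        if [ ! -e "$p" ]; then
            echo "missing host path: $p" >&2
            rc=1
        fi
    done
    return $rc
}

# Usage (before the docker run command above):
#   check_mount_sources "$HOLOSCAN_SDK_INSTALL_PATH" "$APP_PATH" "$DATA_PATH" || exit 1
```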
Tips:
- Run `deviceQuery` inside the container to confirm you are using the iGPU device (Xavier/Orin) and CUDA drivers (11.4).
- Run `tegrastats` on the host to visualize the iGPU load when running an app in the container (`GR3D_FREQ` percentage).
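To watch the iGPU load numerically rather than eyeballing the tegrastats line, the percentage can be extracted with a small filter. A sketch, assuming tegrastats output containing a field of the form `GR3D_FREQ <pct>%...` (the helper name is illustrative):

```shell
#!/bin/sh
# Sketch: pull the GR3D_FREQ percentage out of one tegrastats line.
# Assumes a field like "GR3D_FREQ 45%@921" or "GR3D_FREQ 45%"; prints
# nothing if the field is absent.
gr3d_pct() {
    echo "$1" | sed -n 's/.*GR3D_FREQ \([0-9][0-9]*\)%.*/\1/p'
}

# Usage on the host while the app runs in the container:
#   tegrastats | while read -r line; do gr3d_pct "$line"; done
```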
Troubleshooting:
- `deviceQuery`: no CUDA-capable device is detected
  Ensure that the nvgpu kernel driver is loaded (see prerequisites).
- `tegrastats` does not show `GR3D_FREQ`
  Ensure that the nvgpu kernel driver is loaded (see prerequisites).
- `deviceQuery` returns a "Quadro RTX" capable device (dGPU) and 11.6 CUDA drivers
  Ensure that you're not using the nvidia container runtime (refer to the `--runtime=runc` instructions above).
- `tegrastats`: `GR3D_FREQ` is at 0%
  Ensure that you're not using the nvidia container runtime (refer to the `--runtime=runc` instructions above).

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.