
L4T Compute Assist


Description

This container allows running compute-only applications using CUDA and TensorRT on the integrated GPU of Holoscan devkits (Clara AGX DevKit, IGX Orin DevKit), while other applications run on the dGPU, natively or in another container.

Publisher: NVIDIA
Latest Tag: l4t_35.3-trt_8.5.2
Modified: June 5, 2023
Compressed Size: 2.24 GB
Multinode Support: No
Multi-Arch Support: No

Architecture: Linux / arm64

Overview

The L4T Compute Assist container is part of NVIDIA Holoscan, the AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core microservices to run streaming, imaging, and other applications, from embedded to edge to cloud. It can be used to build streaming AI pipelines for a variety of domains, including Medical Devices, High Performance Computing at the Edge, Industrial Inspection, and more.

This container allows running compute-only applications using CUDA and TensorRT on the integrated GPU (iGPU) of the Holoscan DevKits while other applications can run on a discrete GPU (dGPU), natively or in another container.

For the complete Holoscan documentation, visit the Holoscan developer page.

Prerequisites

Note: Make sure you have joined the Holoscan SDK Program and, if needed, the Rivermax SDK Program before using the NVIDIA SDK Manager.

  1. Set up your developer kit in dGPU mode:

    | Developer Kit        | User Guide  | HoloPack |
    |----------------------|-------------|----------|
    | NVIDIA IGX Orin      | Coming Soon | 2.0      |
    | NVIDIA IGX Orin [ES] | Guide       | 1.2      |
    | NVIDIA Clara AGX     | Guide       | 1.2      |
  2. Load the nvgpu kernel driver for iGPU:

    sudo insmod $(find /usr/lib/modules -name nvgpu.ko -type f,l | head -n1)
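
    To confirm the driver is loaded, a quick check such as the following should list the module (a verification sketch; output details vary by L4T release):

    # The nvgpu module should now appear in the loaded-module list
    lsmod | grep -w nvgpu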
    
  3. Install the Holoscan SDK

  4. Either build the Holoscan application you want to run on the iGPU (used as APP_PATH below), or plan to test with a utility such as deviceQuery (see Tips below).

Running the container

  1. Log in to the NGC docker registry
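
    For example, with the docker CLI you can authenticate to nvcr.io using the literal username $oauthtoken and your NGC API key as the password (a sketch; an API key can be generated from your NGC account settings):

    # Log in to the NGC registry; enter `$oauthtoken` as the username
    # and paste your NGC API key when prompted for the password
    docker login nvcr.io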

  2. Copy the version you want to use from the Get Container drop-down at the top of this webpage, and set this as your NGC_CONTAINER_IMAGE_PATH in your terminal:

    • for HoloPack 2.0, use l4t_35.3-trt_8.5.2
    • for HoloPack 1.2, use l4t_34.1.2-trt_8.4.0
    # For example
    export NGC_CONTAINER_IMAGE_PATH="nvcr.io/nvidia/clara-holoscan/l4t-compute-assist:l4t_35.3-trt_8.5.2"
    
  3. Start the L4T Compute Assist container:

    export HOLOSCAN_SDK_INSTALL_PATH="/opt/holoscan/nvidia" # choose other path if installed somewhere else on your host
    export APP_PATH="/path/to/your_app"
    export DATA_PATH="/path/to/your_data" # if needed by your_app
    docker run -it --rm --net=host --privileged --runtime=runc \
      -v ${HOLOSCAN_SDK_INSTALL_PATH}:${HOLOSCAN_SDK_INSTALL_PATH}:ro \
      -e PYTHONPATH=${HOLOSCAN_SDK_INSTALL_PATH}/python/lib \
      -v ${APP_PATH}:${APP_PATH}:ro \
      -v ${DATA_PATH}:${DATA_PATH}:ro \
      ${NGC_CONTAINER_IMAGE_PATH}
    
    • --privileged grants the container the privileged permissions required to access the iGPU driver.
    • --runtime=runc ensures you are not using the nvidia docker runtime, which would load the dGPU drivers instead of the iGPU drivers. If this does not work, remove the nvidia runtime from your defaults in /etc/docker/daemon.json (you'll then need to add --runtime=nvidia when running your other containers for dGPU).
    • HOLOSCAN_SDK_INSTALL_PATH is the path to where you have installed the Holoscan SDK on your host.
      • Debian package: /opt/holoscan/nvidia
      • Python wheel: in your environment's dist-packages/holoscan
      • If you want to mount the SDK from a Holoscan dGPU container instead of your devkit host:
        • add --name holoscan_dgpu -v /opt/holoscan/nvidia when running your dGPU container
        • replace -v ${HOLOSCAN_SDK_INSTALL_PATH} ... with --volumes-from holoscan_dgpu when running the L4T Compute Assist container
    • -e PYTHONPATH is set for python3 to find the Holoscan python module if it is not installed from a wheel.
    • APP_PATH is the path of your built Holoscan application on your host.
    • DATA_PATH is the path of any data on your host which you might need to run your application.
  4. Once in the container, you can run ${APP_PATH}, or any other commands to leverage CUDA and/or TensorRT on the iGPU.
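
    A minimal session inside the container might then look like this (a sketch, assuming deviceQuery is available on the container's PATH as the Tips below suggest):

    # Optionally confirm the iGPU is visible before launching
    deviceQuery
    # Run the mounted application on the iGPU
    ${APP_PATH}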

Tips:

  • You can run deviceQuery inside the container to confirm you are using the iGPU device (Xavier/Orin) and CUDA drivers (11.4).
  • You can run tegrastats on the host to visualize the iGPU load when running an app in the container (GR3D_FREQ percentage).
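
For example, this host-side one-liner isolates the iGPU utilization field from the tegrastats stream (a sketch; the exact GR3D_FREQ output format varies across L4T releases):

    # On the host: print only the iGPU utilization once per second
    sudo tegrastats --interval 1000 | grep --line-buffered -o 'GR3D_FREQ [0-9]*%'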

Troubleshooting

deviceQuery: no CUDA-capable device is detected

Ensure that the nvgpu kernel driver is loaded (see prerequisites).

tegrastats: does not show GR3D_FREQ

Ensure that the nvgpu kernel driver is loaded (see prerequisites).

deviceQuery: reports a "Quadro RTX" device (dGPU) with 11.6 CUDA drivers

Ensure that you're not using the nvidia container runtime (refer to --runtime=runc instructions above).
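
You can verify which runtime the docker daemon defaults to from the host (a sketch; after removing the "default-runtime": "nvidia" entry from /etc/docker/daemon.json, restart the daemon for the change to take effect):

    # Should print runc; if it prints nvidia, edit /etc/docker/daemon.json
    docker info --format '{{.DefaultRuntime}}'
    # Apply daemon.json changes
    sudo systemctl restart docker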

tegrastats: GR3D_FREQ is at 0%

Ensure that:

  • you're not using the nvidia container runtime (refer to --runtime=runc instructions above)
  • your app is running in the iGPU container
  • your app uses CUDA for GPU computation
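
The first two checks can be made from the host (a sketch; replace <container_name_or_id> with your L4T Compute Assist container's name or ID):

    # Confirm the container was started with the runc runtime (not nvidia)
    docker inspect --format '{{.HostConfig.Runtime}}' <container_name_or_id>
    # Confirm your application's process is running inside that container
    docker top <container_name_or_id>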

License

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.