Holoscan LTSB2
Description
Holoscan LTSB2, part of NVIDIA AI Enterprise, is the AI sensor processing platform.
Publisher
NVIDIA
Latest Tag
23.10.08-lws2.0.11-dgpu
Modified
February 27, 2025
Compressed Size
7.56 GB
Multinode Support
No
Multi-Arch Support
Yes
23.10.08-lws2.0.11-dgpu (Latest) Security Scan Results

Security scan results are available for Linux / amd64 and Linux / arm64 on the Security Scanning tab.

What Is Holoscan?

The Holoscan container is part of NVIDIA Holoscan, the AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core microservices to run streaming, imaging, and other applications, from embedded to edge to cloud. It can be used to build streaming AI pipelines for a variety of domains, including Medical Devices, High Performance Computing at the Edge, Industrial Inspection and more.

The Holoscan container includes the Holoscan libraries, GXF extensions, headers, example source code, and sample datasets, as well as all the dependencies that were tested with Holoscan. It is the recommended way to run the Holoscan examples, while still allowing you to create your own C++ and Python Holoscan application.
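As an illustration of what such an application looks like, here is a minimal sketch of a custom Python app, modeled on the ping examples shipped under /opt/nvidia/holoscan/examples (the operator and application names below are illustrative, not SDK classes):

# Minimal custom Holoscan Python application: a transmitter operator
# sends ten integers to a receiver operator that prints them.
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec

class PingTxOp(Operator):
    def __init__(self, fragment, *args, **kwargs):
        self.index = 0
        super().__init__(fragment, *args, **kwargs)

    def setup(self, spec: OperatorSpec):
        spec.output("out")  # declare a single output port

    def compute(self, op_input, op_output, context):
        self.index += 1
        op_output.emit(self.index, "out")

class PingRxOp(Operator):
    def setup(self, spec: OperatorSpec):
        spec.input("in")  # declare a single input port

    def compute(self, op_input, op_output, context):
        print(f"received: {op_input.receive('in')}")

class PingApp(Application):
    def compose(self):
        # CountCondition stops the transmitter after 10 messages.
        tx = PingTxOp(self, CountCondition(self, 10), name="tx")
        rx = PingRxOp(self, name="rx")
        self.add_flow(tx, rx)  # connect tx's output to rx's input

if __name__ == "__main__":
    PingApp().run()

Save it anywhere in the container (e.g., as ping.py) and run it with python3 ping.py.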

Getting started with IGX Holoscan LTSB2

Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA IGX Orin Documentation.

Visit the Holoscan User Guide to get started with the Holoscan SDK.

Holoscan Prerequisites

Prerequisites for each supported platform are documented in the user guide.

On x86_64, you'll need the NVIDIA Container Toolkit version 1.14.1 and Docker. These should already be installed as part of IGX SW 1.0+.

Running the container

  1. Log in to the NGC docker registry

    docker login nvcr.io
    
  2. Press the Get Container button at the top of this webpage and choose the version you want to use. Set it as NGC_CONTAINER_IMAGE_PATH in your terminal so the next steps can use it:

    # For example
    export NGC_CONTAINER_IMAGE_PATH="nvcr.io/nvaie/holoscan-ltsb2:23.10.08-lws2.0.11-dgpu"
    
  3. If using a display, ensure that X11 is configured to allow commands from docker:

    xhost +local:docker
    
  4. Start the container

    docker run -it --rm --net host \
      --runtime=nvidia \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -e DISPLAY=$DISPLAY \
      --ipc=host \
      --cap-add=CAP_SYS_PTRACE \
      --ulimit memlock=-1 \
      --ulimit stack=67108864 \
      ${NGC_CONTAINER_IMAGE_PATH}
    
    • --runtime=nvidia and -e NVIDIA_DRIVER_CAPABILITIES are properties of the NVIDIA Container Toolkit used to leverage the NVIDIA GPUs and their capabilities. Read more here.
    • -v /tmp/.X11-unix and -e DISPLAY are needed to enable X11 display forwarding.
    • --ipc=host, --cap-add=CAP_SYS_PTRACE, --ulimit memlock=-1 and --ulimit stack=67108864 are required to run distributed applications with UCX. Read more here.

    To expose additional hardware devices from your host to the container, add the --privileged flag to docker run (not secure), or mount their explicit device nodes by adding the flags below:

    • AJA capture card: add --device /dev/ajantv20 (and/or ajantv2<n>).
    • V4L2 video devices (HDMI IN, USB): add --device /dev/video0 (and/or video<n>)
      • If configuring a non-root user in the container, add --group-add video or ensure the user has appropriate permissions to the video device nodes (/dev/video*).
      • If using HDMI IN from a developer kit, also add --device /dev/capture-vi-channel0 to access the Tegra Video Input channels. You might need to add more nodes (with the last digit increasing) depending on the number of channels needed.
    • ConnectX RDMA: add --device /dev/infiniband/rdma_cm and --device /dev/infiniband/uverbs0 (and/or uverbs<n>).
      • This requires the MOFED drivers installed on the host.
      • Needed for RDMA (RoCE or Infiniband). Not required for simple TCP Ethernet communication through a ConnectX SmartNIC.

    If configuring a non-root user in the container, ensure the user has appropriate permissions to the DRI device nodes (/dev/dri/*). This can be done by adding --group-add $(cat /etc/group | grep "video" | cut -d: -f3) and --group-add $(cat /etc/group | grep "render" | cut -d: -f3). Note that simply passing --group-add render might not work if the group ID differs between your host and the container, even when mounting /etc/group.

Using the installed libraries and headers

The Holoscan SDK is installed under /opt/nvidia/holoscan. It includes a CMake configuration file inside lib/cmake/holoscan, allowing you to import holoscan in your CMake project (link libraries + include headers):

find_package(holoscan REQUIRED CONFIG PATHS "/opt/nvidia/holoscan")
target_link_libraries(yourTarget PUBLIC holoscan::core)

Alternatives to hardcoding PATHS inside find_package in CMake are listed under the Config Mode Search Procedure documentation.

Examples

Python, C++, and GXF examples are installed in /opt/nvidia/holoscan/examples alongside their source code and run instructions (also available on the GitHub repository).

Running the examples

To run the Hello World example:

# Python
python3 /opt/nvidia/holoscan/examples/hello_world/python/hello_world.py


# C++
cd /opt/nvidia/holoscan/examples
./hello_world/cpp/hello_world

Make sure to edit any relative paths in the YAML config if you want to run an example from a different working directory.
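
Relative paths inside a YAML config are resolved against your current working directory, while the path to the config file itself can be made robust by resolving it next to the script. The sketch below shows how an application registers a YAML config with config() and reads a parameter block with kwargs(); the greeting_config.yaml file and its greeter block are hypothetical, made up for illustration:

# Sketch: load parameters from a YAML config resolved relative to this
# script. Assumes a (hypothetical) greeting_config.yaml next to it with:
#
#   greeter:
#     greeting: "hello from YAML"
#
import os

from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator

class PrintGreetingOp(Operator):
    def __init__(self, fragment, *args, greeting="hello", **kwargs):
        self.greeting = greeting  # parameter supplied from the YAML block
        super().__init__(fragment, *args, **kwargs)

    def compute(self, op_input, op_output, context):
        print(self.greeting)

class GreetingApp(Application):
    def compose(self):
        # self.kwargs("greeter") returns the "greeter" block of the
        # registered config as a dict of keyword arguments.
        greeter = PrintGreetingOp(
            self, CountCondition(self, 1), name="greeter", **self.kwargs("greeter")
        )
        self.add_operator(greeter)

if __name__ == "__main__":
    app = GreetingApp()
    # Resolve the config next to this script so the app still finds it
    # when launched from another working directory.
    app.config(os.path.join(os.path.dirname(__file__), "greeting_config.yaml"))
    app.run()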

Building the examples

You can rebuild the C++ and GXF examples as-is or copy them anywhere on your system to experiment with.

To build all the C++ and GXF examples:


export src_dir="/opt/nvidia/holoscan/examples/" # Add "<example_of_your_choice>/cpp" to build a specific example
export build_dir="</path/of/your/choice/>"

cmake -S $src_dir -B $build_dir -G Ninja \
  -D Holoscan_ROOT="/opt/nvidia/holoscan"
cmake --build $build_dir -j

Also see the HoloHub repository for a collection of Holoscan operators and applications that you can use in your pipeline or refer to for examples.

Security Vulnerabilities in Open Source Packages

Please review the Security Scanning tab to view the latest security scan results.

For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.

Get Help

Enterprise Support

Get access to knowledge base articles and support cases. File a Ticket

NVIDIA AI Enterprise Documentation

Learn more about how to deploy NVIDIA AI Enterprise and access more technical information by visiting the documentation hub.

NVIDIA Licensing Portal

Access the NVIDIA Licensing Portal to manage your software licenses.