Holoscan PB May 2025 (PB 25h1)

Description
Holoscan Production Branch May 2025 (PB 25h1) offers a 9-month lifecycle for API stability, with monthly patches for high and critical software vulnerabilities.
Publisher: NVIDIA
Latest Tag: 25.03.02
Modified: June 9, 2025
Compressed Size: 9.04 GB
Multinode Support: No
Multi-Arch Support: No
25.03.02 (Latest) Security Scan Results

Linux / amd64


Overview

What Is Holoscan?

NVIDIA Holoscan is the AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core microservices to run streaming, imaging, and other applications, from embedded to edge to cloud. It can be used to build streaming AI pipelines for a variety of domains, including medical devices, high-performance computing at the edge, industrial inspection, and more.

What is the Holoscan container?

The Holoscan container includes the Holoscan libraries, GXF extensions, headers, example source code, and sample datasets, as well as all the dependencies that were tested with Holoscan. It is the recommended way to run the Holoscan examples, while still allowing you to create your own C++ and Python Holoscan applications.

What Is Holoscan Production Branch May 2025?

The Holoscan Production Branch, exclusively available with NVIDIA AI Enterprise, is a 9-month supported, API-stable branch that includes monthly fixes for high and critical software vulnerabilities. This branch provides a stable and secure environment for building your mission-critical AI applications. The Holoscan Production Branch releases every six months, with a three-month overlap between two consecutive releases. The Holoscan SDK version used for this production branch is 3.3.0.
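To make the cadence concrete, the lifecycle arithmetic can be sketched in a few lines of Python. The dates below are illustrative placeholders derived from the six-month cadence and 9-month lifecycle stated above, not an official release schedule:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months (day pinned to the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, 1)

# Illustrative dates only: PB 25h1 corresponds to May 2025.
release = date(2025, 5, 1)               # this production branch
next_release = add_months(release, 6)    # hypothetical next branch, six months later
support_end = add_months(release, 9)     # end of this branch's 9-month lifecycle

# The overlap is the window in which both branches are still supported.
overlap_months = (support_end.year - next_release.year) * 12 \
    + (support_end.month - next_release.month)
print(next_release, support_end, overlap_months)  # 2025-11-01 2026-02-01 3
```

The three-month overlap gives you a migration window in which both the outgoing and incoming production branches receive security fixes.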

Getting started with Holoscan Production Branch

Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA IGX Orin Documentation.

Visit the Holoscan User Guide to get started with the Holoscan SDK.


Using the Holoscan container

Prerequisites

Prerequisites for each supported platform are documented in the user guide.

Additionally, you'll need the NVIDIA Container Toolkit version 1.14.1 and Docker.

Running the container

  1. Log in to the NGC docker registry

    docker login nvcr.io
    
  2. Press the Get Container button at the top of this webpage and choose the version you want to use. You can set it as NGC_CONTAINER_IMAGE_PATH in your terminal for the next steps to use:

    # For example
    export NGC_CONTAINER_IMAGE_PATH="nvcr.io/nvaie/holoscan-pb25h1:25.03.02"
    
  3. If using a display, ensure that X11 is configured to allow commands from docker:

    xhost +local:docker
    
  4. Start the container

    docker run -it --rm --net host \
      --runtime=nvidia \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -e DISPLAY=$DISPLAY \
      --ipc=host \
      --cap-add=CAP_SYS_PTRACE \
      --ulimit memlock=-1 \
      --ulimit stack=67108864 \
      ${NGC_CONTAINER_IMAGE_PATH}
    
    • --runtime=nvidia and -e NVIDIA_DRIVER_CAPABILITIES are options from the NVIDIA Container Toolkit that expose the NVIDIA GPUs and their capabilities to the container. Read more here.
    • -v /tmp/.X11-unix and -e DISPLAY are needed to enable X11 display forwarding.
    • --ipc=host, --cap-add=CAP_SYS_PTRACE, --ulimit memlock=-1 and --ulimit stack=67108864 are required to run distributed applications with UCX. Read more here.

    To expose additional hardware devices from your host to the container, add the --privileged flag to docker run (not secure), or mount their explicit device nodes by adding the flags below:

    • AJA capture card: add --device /dev/ajantv20 (and/or ajantv2<n>).
    • V4L2 video devices: add --device /dev/video0 (and/or video<n>). If configuring a non-root user in the container, add --group-add video or ensure the user has appropriate permissions to the video device nodes (/dev/video*).
    • ConnectX RDMA: add --device /dev/infiniband/rdma_cm and --device /dev/infiniband/uverbs0 (and/or uverbs<n>).
      • This requires the MOFED drivers installed on the host.
      • Needed for RDMA (RoCE or InfiniBand). Not required for simple TCP Ethernet communication through a ConnectX SmartNIC.

Using the Holoscan SDK

C++

The Holoscan SDK is installed under /opt/nvidia/holoscan. It includes a CMake configuration file inside lib/cmake/holoscan, allowing you to import holoscan in your CMake project (link libraries + include headers):

find_package(holoscan REQUIRED CONFIG PATHS "/opt/nvidia/holoscan")
target_link_libraries(yourTarget PUBLIC holoscan::core)

Alternatives to hardcoding PATHS inside find_package in CMake are listed under the Config Mode Search Procedure documentation.
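For context, the two commands above can sit in a minimal CMakeLists.txt along the following lines; the project name, target name, and source file are placeholders for your own application:

```cmake
cmake_minimum_required(VERSION 3.20)
project(my_holoscan_app CXX)

# Import the Holoscan SDK installed in the container
find_package(holoscan REQUIRED CONFIG PATHS "/opt/nvidia/holoscan")

add_executable(my_app main.cpp)
target_link_libraries(my_app PUBLIC holoscan::core)
```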

Python

For Python developers, PYTHONPATH is already set to include /opt/nvidia/holoscan/python/lib, allowing you to simply call import holoscan.
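As a sketch of what that enables, a minimal Python application might look like the following. It is runnable only where the Holoscan SDK is available (such as inside this container); the operator and class names are illustrative, and the structure mirrors the SDK's hello-world pattern:

```python
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator


class HelloOp(Operator):
    """A trivial operator that prints a message each time it is scheduled."""

    def compute(self, op_input, op_output, context):
        print("Hello from Holoscan!")


class HelloApp(Application):
    def compose(self):
        # Run the operator exactly once, then let the application terminate.
        hello = HelloOp(self, CountCondition(self, 1), name="hello")
        self.add_operator(hello)


if __name__ == "__main__":
    HelloApp().run()
```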

Examples

Python, C++, and GXF examples are installed in /opt/nvidia/holoscan/examples alongside their source code and run instructions (also available on the GitHub repository).

Running the examples

For example, to run the Hello World example:

# Python
python3 /opt/nvidia/holoscan/examples/hello_world/python/hello_world.py

# C++
/opt/nvidia/holoscan/examples/hello_world/cpp/hello_world

Refer to the README in each example folder for specific run instructions.

Building the examples

You can rebuild the C++ and GXF examples as-is or copy them anywhere on your system to experiment with.

For example, to build all the C++ and GXF examples:

export src_dir="/opt/nvidia/holoscan/examples/" # Add "<example_of_your_choice>/cpp" to build a specific example
export build_dir="/opt/nvidia/holoscan/examples/build" # Or the path of your choice
cmake -S $src_dir -B $build_dir -D Holoscan_ROOT="/opt/nvidia/holoscan" -G Ninja
cmake --build $build_dir -j

Also see the HoloHub repository for a collection of Holoscan operators and applications which you can use in your pipeline or for reference.


Security Vulnerabilities in Open Source Packages

Please review the Security Scanning tab to view the latest security scan results.

For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.


Known Issues

This section describes issues discovered during development and QA that are not resolved in this release.

• 4339399: High CPU usage observed with the video_replayer_distributed application. While the high CPU usage associated with the GXF UCX extension has been fixed since v1.0, distributed applications using the MultiThreadScheduler (with the check_recession_period_ms parameter set to 0 by default) may still experience high CPU usage. Setting the HOLOSCAN_CHECK_RECESSION_PERIOD_MS environment variable to a value greater than 0 (e.g., 1.5) can help reduce CPU usage. However, this may result in increased latency for the application until the MultiThreadScheduler switches to an event-based multithreaded scheduler.
• 4318442: The UCX cuda_ipc protocol does not work in Docker containers on x86_64. As a workaround, the UCX cuda_ipc protocol is currently disabled on all platforms via the UCX_TLS environment variable.
• 4384348: UCX termination (via Ctrl+C, pressing 'Esc', or clicking the close button) is not smooth and can show multiple error messages.
• 4481171: Running the driver for a distributed application on IGX Orin devkits fails when connected to other systems through eth1. A workaround is to use the eth0 port to connect to other systems for distributed workloads.
• 4458192: In scenarios where distributed applications have both the driver and workers running on the same host, either within a Docker container or directly on the host, "Address already in use" errors may occur. A potential solution is to assign a different port number to the HOLOSCAN_HEALTH_CHECK_PORT environment variable (default: 8777), for example by using export HOLOSCAN_HEALTH_CHECK_PORT=8780.
• 4768945: Distributed applications crash when the engine file is unavailable or still being generated.
• 4753994: Debugging a Python application may lead to a segfault when expanding an operator variable.
• Wayland: holoscan::viz::Init() with an existing GLFW window fails.
• 4394306: When Python bindings are created for a C++ operator, it is not always guaranteed that the destructor will be called prior to termination of the Python application. As a workaround, it is recommended that any resource cleanup happen in an operator's stop() method rather than in the destructor.
• 4909073: V4L2 and AJA applications in the x86 container report a Wayland "XDG_RUNTIME_DIR not set" error.
• 5211869: Error "Failed to start server on 0.0.0.0:10002" when debugging the Distributed Endoscopy Tool Tracking application in VS Code.
• 5180229: std::bad_alloc exception with Isaac Sim due to loading the Holoscan Python module with the RTLD_GLOBAL flag.
• 5162855: Dual-GPU configurations cause Holoscan applications to fail.
• 5144233: python-api-tracing-profile failure on Ubuntu 24.04 / Python 3.12.
• 5098866: Segmentation fault in Python on the second .run() call.
• 5061275: Ping distributed multi-node runs fail to create a path between two nodes.
• 5052065: In v4l2_camera_usb_webcam (Python) / v4l2_camera_hdmi_in (Python), the captured video streams display correctly but report the error "GXF_ENTITY_COMPONENT_NOT_FOUND".
• 5014059: Debugging the Distributed Endoscopy Tool Tracking application in VS Code on an IGX with dGPU does not hit the expected debug points.
• 4953020: The HOLOINFER_TEST fails, or a segmentation fault occurs during the parseFromFile() call.
• 4808248: FormatConverterOp: a GXF VideoBuffer with format GXF_VIDEO_FORMAT_NV12 is processed in the wrong colorspace.
• 4789382: InferenceOp with the libtorch backend reports undefined symbols.

Get Help

Enterprise Support

Get access to knowledge base articles and support cases. File a Ticket

NVIDIA AI Enterprise Documentation

Learn more about how to deploy NVIDIA AI Enterprise and access more technical information by visiting the documentation hub.

NVIDIA Licensing Portal

Access the NVIDIA Licensing Portal to manage your software licenses.