
Holoscan Container

The Holoscan container includes the Holoscan libraries, GXF extensions, headers, example source code, and sample datasets. It is the recommended way to run the Holoscan examples or build your own applications.
Latest Tag: v1.0.3-igpu
Modified: March 1, 2024
Compressed Size: 4.43 GB
Architecture: Linux / arm64


The Holoscan container is part of NVIDIA Holoscan, the AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core microservices to run streaming, imaging, and other applications, from embedded to edge to cloud. It can be used to build streaming AI pipelines for a variety of domains, including medical devices, high-performance computing at the edge, industrial inspection, and more.

In previous releases, the prefix Clara was used to define Holoscan as a platform designed initially for medical devices. As Holoscan has grown, its potential to serve other areas has become apparent. With version 0.4.0, we're proud to announce that the Holoscan SDK is now officially built to be domain-agnostic and can be used to build sensor AI applications in multiple domains. Note that some of the content of the SDK (sample applications) or the documentation might still appear to be healthcare-specific pending additional updates. Going forward, domain-specific content will be hosted on the HoloHub repository.

The Holoscan container includes the Holoscan libraries, GXF extensions, headers, example source code, and sample datasets, as well as all the dependencies that were tested with Holoscan. It is the recommended way to run the Holoscan examples, while still allowing you to create your own C++ and Python Holoscan applications.

Getting Started

Visit the Holoscan User Guide to get started with the Holoscan SDK.


  • Prerequisites for each supported platform are documented in the user guide.
  • Additionally, on x86_64, you'll need the NVIDIA Container Toolkit version 1.14.1 and Docker (a quick sanity check is sketched below). These should already be installed on NVIDIA developer kits with IGX Software or JetPack.
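
As a quick sanity check (a sketch only, assuming a default installation), you can confirm that Docker and the NVIDIA Container Toolkit are installed and that the nvidia runtime is registered with Docker:

docker --version
nvidia-ctk --version
docker info | grep -i nvidia  # the nvidia runtime should appear among the registered runtimes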

Running the container

  1. Log in to the NGC docker registry

    docker login nvcr.io
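    # When prompted, authenticate with your NGC credentials:
    #   Username: $oauthtoken
    #   Password: <your NGC API key>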
  2. Press the Copy Image Path button at the top of this webpage and choose the version you want to test:

    • select v<version>-dgpu for x86_64 systems or a Holoscan developer kit configured with a discrete GPU
    • select v<version>-igpu for Holoscan developer kits configured with an integrated GPU

    Set it as your NGC_CONTAINER_IMAGE_PATH in your terminal:

    # For example
    export NGC_CONTAINER_IMAGE_PATH="nvcr.io/nvidia/clara-holoscan/holoscan:v1.0.3-dgpu"
  3. Ensure that X11 is configured to allow commands from docker:

    xhost +local:docker
  4. Start the container

    docker run -it --rm --net host \
      --runtime=nvidia \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -e DISPLAY=$DISPLAY \
      --ipc=host \
      --cap-add=CAP_SYS_PTRACE \
      --ulimit memlock=-1 \
      --ulimit stack=67108864 \
      ${NGC_CONTAINER_IMAGE_PATH}
    • --runtime=nvidia and -e NVIDIA_DRIVER_CAPABILITIES are properties of the NVIDIA Container Toolkit to leverage the NVIDIA GPUs and their capabilities. Read more here.
    • -v /tmp/.X11-unix and -e DISPLAY are needed to enable X11 display forwarding.
    • --ipc=host, --cap-add=CAP_SYS_PTRACE, --ulimit memlock=-1 and --ulimit stack=67108864 are required to run distributed applications with UCX. Read more here.
    • Add --device /dev/ajantv20 (and/or ajantv2<n>) in the docker run command if you have an AJA capture card you'd like to access from the container.
    • Add --device /dev/video0 (and/or video<n>) to make your V4L2 video devices (HDMI IN, USB) available in the container.
      • If configuring a non-root user in the container, add --group-add video or ensure the user has appropriate permissions to the video device nodes (/dev/video*).
      • If using HDMI IN from a developer kit, also add --device /dev/capture-vi-channel<n> (or --privileged to avoid numerous flags)
    • Add --device /dev/infiniband/rdma_cm and --device /dev/infiniband/uverbs0 (and/or uverbs<n>) to make your ConnectX RDMA interface available in the container.
      • This requires the MOFED drivers installed on the host.
      • Needed for RoCE or InfiniBand. Not required for simple TCP Ethernet communication through a ConnectX SmartNIC.
    • If configuring a non-root user in the container, ensure the user has appropriate permissions to the dri device nodes (/dev/dri/*). This can be done by adding --group-add $(cat /etc/group | grep "video" | cut -d: -f3) and --group-add $(cat /etc/group | grep "render" | cut -d: -f3). (Note: simply passing --group-add render might not work if the group ID differs between your host and the container, even if mounting /etc/group.) A combined docker run command with some of these optional flags is sketched after this list.
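
    For illustration only, here is one way the optional flags above could be combined with the base command. The device nodes shown (/dev/video0, /dev/ajantv20) are examples and depend on the hardware attached to your system:

    docker run -it --rm --net host \
      --runtime=nvidia \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -e DISPLAY=$DISPLAY \
      --ipc=host \
      --cap-add=CAP_SYS_PTRACE \
      --ulimit memlock=-1 \
      --ulimit stack=67108864 \
      --group-add video \
      --device /dev/video0 \
      --device /dev/ajantv20 \
      ${NGC_CONTAINER_IMAGE_PATH}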

Using the installed libraries and headers

The Holoscan SDK is installed under /opt/nvidia/holoscan. It includes a CMake configuration file inside lib/cmake/holoscan, allowing you to import holoscan in your CMake project (link libraries + include headers):

find_package(holoscan REQUIRED CONFIG PATHS "/opt/nvidia/holoscan")
target_link_libraries(yourTarget PUBLIC holoscan::core)

Alternatives to hardcoding PATHS inside find_package in CMake are listed under the Config Mode Search Procedure documentation.
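
For instance, a minimal out-of-tree project could look like the sketch below. The project and target names (my_holoscan_app, my_app) and the main.cpp source file are placeholders for your own code, and the minimum CMake version shown is illustrative:

# CMakeLists.txt (minimal sketch)
cmake_minimum_required(VERSION 3.20)
project(my_holoscan_app LANGUAGES CXX)

# Import the Holoscan SDK installed in the container
find_package(holoscan REQUIRED CONFIG PATHS "/opt/nvidia/holoscan")

# Your application sources (placeholder)
add_executable(my_app main.cpp)

# Link against holoscan::core to get the Holoscan headers and libraries
target_link_libraries(my_app PUBLIC holoscan::core)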


Python, C++, and GXF examples are installed in /opt/nvidia/holoscan/examples alongside their source code and run instructions (also available on the GitHub repository).

Running the examples

For example, to run the Hello World example:

# Python
python3 /opt/nvidia/holoscan/examples/hello_world/python/hello_world.py

# C++
/opt/nvidia/holoscan/examples/hello_world/cpp/hello_world

Make sure to edit any relative path in the yaml config if you want to run from a different working directory.
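
For instance (assuming the video_replayer example and its YAML config are installed at the path below), one simple approach is to run an example from its own directory so the relative paths in its config resolve as-is:

# Hypothetical illustration: run from the example's directory so relative paths
# in its YAML config resolve correctly
cd /opt/nvidia/holoscan/examples/video_replayer/python
python3 video_replayer.py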

Building the examples

You can rebuild the C++ and GXF examples as-is or copy them anywhere on your system to experiment with.

Example to build all the C++ and GXF examples:

export src_dir="/opt/nvidia/holoscan/examples/" # Add "<example_of_your_choice>/cpp" to build a specific example
export build_dir="</path/of/your/choice/>"
cmake -S $src_dir -B $build_dir -G Ninja \
  -D Holoscan_ROOT="/opt/nvidia/holoscan"
cmake --build $build_dir -j
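
For instance, following the comment above, you could configure and build only the C++ Hello World example (the build directory below is an arbitrary choice):

export src_dir="/opt/nvidia/holoscan/examples/hello_world/cpp"
export build_dir="$HOME/build/holoscan_hello_world"
cmake -S $src_dir -B $build_dir -G Ninja \
  -D Holoscan_ROOT="/opt/nvidia/holoscan"
cmake --build $build_dir -j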

Also see the HoloHub repository for a collection of Holoscan operators and applications which you can use in your pipeline or for reference.


By pulling and using the container, you accept the terms and conditions of this End User License Agreement.