
Holoscan Container


Description

The Holoscan container includes the built Holoscan libraries, GXF extensions, headers, example source code, and sample datasets. It is the simplest way to run sample streaming applications or create your own application using the Holoscan SDK.

Publisher

NVIDIA

Latest Tag

v0.4.0

Modified

February 1, 2023

Compressed Size

5.91 GB

Multinode Support

No

Multi-Arch Support

Yes


Overview

The Holoscan container is part of NVIDIA Holoscan, the AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core microservices to run streaming, imaging, and other applications, from embedded to edge to cloud. It can be used to build streaming AI pipelines for a variety of domains, including Medical Devices, High Performance Computing at the Edge, Industrial Inspection and more.

In previous releases, the prefix Clara was used because Holoscan was initially designed as a platform for medical devices. As Holoscan has grown, its potential to serve other areas has become apparent. With version 0.4.0, we're proud to announce that the Holoscan SDK is now officially built to be domain-agnostic and can be used to build sensor AI applications in multiple domains. Note that some of the content of the SDK (sample applications) or the documentation might still appear healthcare-specific pending additional updates. Going forward, domain-specific content will be hosted on the HoloHub repository.

The Holoscan container includes the Holoscan libraries, GXF extensions, headers, example source code, and sample datasets. It is the simplest way to run sample streaming applications or create your own application using the Holoscan SDK.

Visit the NGC demo website for a live demonstration of some of Holoscan's capabilities.

Prerequisites

The Holoscan container is designed to run on any of the Holoscan Developer Kits (aarch64) as well as x86_64 systems.

For a full list of Holoscan documentation, visit the Holoscan developer page.

For Clara AGX and NVIDIA IGX Orin Developer Kits (aarch64)

Set up your developer kit:

Make sure you have joined the Holoscan SDK Program and, if needed, the RiverMax SDK Program before using the NVIDIA SDK Manager.

SDK Manager will install Holopack 1.1 as well as the nvgpuswitch.py script. Once configured for dGPU mode, your developer kit will include the components necessary to run the container.

Refer to the User Guide for additional steps to support the AJA capture card.

For x86_64 systems

You'll need the following to run the container on x86_64:

Running the container

  1. Log in to the NGC docker registry
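    A login for nvcr.io typically looks like the following (the API key value is a placeholder; generate a real one from your NGC account setup page):

```shell
# Hypothetical placeholder; substitute your actual NGC API key.
NGC_API_KEY="<your-ngc-api-key>"

# The username for nvcr.io is the literal string '$oauthtoken'
# (single-quoted so the shell does not expand it); the password
# is your NGC API key, passed on stdin.
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
```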

  2. Press the Copy Image Path button at the top of this webpage, choose the version you want to test, and set this as your NGC_CONTAINER_IMAGE_PATH in your terminal

    # For example
    export NGC_CONTAINER_IMAGE_PATH="nvcr.io/nvidia/clara-holoscan/holoscan:v0.4.0"
    
  3. Ensure that X11 is configured to allow commands from docker:

    xhost +local:docker
    
  4. Start the container

    Add --device /dev/ajantv20:/dev/ajantv20 in the docker run command if you also have an AJA capture card you'd like to access from the container. Similarly, add --device /dev/video0:/dev/video0 (and/or video1, etc...) to make your USB cameras available to the V4L2 codelet in the container.

    # Find the nvidia_icd.json file which could reside at different paths
    # Needed due to https://github.com/NVIDIA/nvidia-container-toolkit/issues/16
    nvidia_icd_json=$(find /usr/share /etc -path '*/vulkan/icd.d/nvidia_icd.json' -type f 2>/dev/null | grep .) || (echo "nvidia_icd.json not found" >&2 && false)
    
    # Run the container
    docker run -it --rm --net host \
      --runtime=nvidia \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -v $nvidia_icd_json:$nvidia_icd_json:ro \
      -e NVIDIA_DRIVER_CAPABILITIES=graphics,video,compute,utility,display \
      -e DISPLAY=$DISPLAY \
      ${NGC_CONTAINER_IMAGE_PATH}
    

Running sample applications

The Holoscan container includes the applications below. C++ applications currently need to be run from /opt/nvidia/holoscan to resolve the paths to the datasets. Python applications can run from any working directory since HOLOSCAN_SAMPLE_DATA_PATH is set as an environment variable in the container.

Endoscopy Tool Tracking

Based on an LSTM (long short-term memory) stateful model, these applications demonstrate the use of custom components for tool tracking, including composition and rendering of text, tool position, and mask (as a heatmap) combined with the original video stream.

Requirements

The provided applications are configured to either use the AJA capture card for input stream, or a pre-recorded endoscopy video (replayer). Follow the setup instructions from the user guide to use the AJA capture card.

Data

📦️ (NGC) Sample App Data for AI-based Endoscopy Tool Tracking

Run Instructions

  • Using a pre-recorded video

    # C++
    sed -i -e 's#^source:.*#source: replayer#' ./apps/endoscopy_tool_tracking/cpp/app_config.yaml \
      && ./apps/endoscopy_tool_tracking/cpp/endoscopy_tool_tracking
    
    # Python
    python3 ./apps/endoscopy_tool_tracking/python/endoscopy_tool_tracking.py --source=replayer
    
  • Using an AJA card

    # C++
    sed -i -e 's#^source:.*#source: aja#' ./apps/endoscopy_tool_tracking/cpp/app_config.yaml \
      && ./apps/endoscopy_tool_tracking/cpp/endoscopy_tool_tracking
    
    # Python
    python3 ./apps/endoscopy_tool_tracking/python/endoscopy_tool_tracking.py --source=aja
    

Ultrasound Bone Scoliosis Segmentation

A full workflow including a generic visualization of segmentation results from a spinal scoliosis segmentation model of ultrasound videos. The model used is stateless, so this workflow could be configured to adapt to any vanilla DNN model.

Requirements

The provided applications are configured to either use the AJA capture card for input stream, or a pre-recorded video of the ultrasound data (replayer). Follow the setup instructions from the user guide to use the AJA capture card.

Data

📦️ (NGC) Sample App Data for AI-based Bone Scoliosis Segmentation

Run Instructions

  • Using a pre-recorded video

    # C++
    sed -i -e 's#^source:.*#source: replayer#' ./apps/ultrasound_segmentation/cpp/app_config.yaml \
      && ./apps/ultrasound_segmentation/cpp/ultrasound_segmentation
    
    # Python
    python3 ./apps/ultrasound_segmentation/python/ultrasound_segmentation.py --source=replayer
    
  • Using an AJA card

    # C++
    sed -i -e 's#^source:.*#source: aja#' ./apps/ultrasound_segmentation/cpp/app_config.yaml \
      && ./apps/ultrasound_segmentation/cpp/ultrasound_segmentation
    
    # Python
    python3 ./apps/ultrasound_segmentation/python/ultrasound_segmentation.py --source=aja
    

Multi-AI Ultrasound

This application demonstrates how to run multiple inference pipelines in a single application by leveraging the Holoscan Inference module, a framework that facilitates designing and executing inference applications in the Holoscan SDK.

The Multi AI operators (inference and postprocessor) use APIs from the Holoscan Inference module to extract data, initialize and execute the inference workflow, process, and transmit data for visualization.

The application uses models and echocardiogram data from iCardio.ai. The models include:

  • a Plax chamber model, which identifies four critical linear measurements of the heart
  • a Viewpoint Classifier model, which determines the confidence of each frame against the 28 known cardiac anatomical views defined by the guidelines of the American Society of Echocardiography
  • an Aortic Stenosis Classification model, which provides a score estimating the likelihood of the presence of aortic stenosis

Requirements

The provided applications are configured to either use the AJA capture card for input stream, or a pre-recorded video of the echocardiogram (replayer). Follow the setup instructions from the user guide to use the AJA capture card.

Data

📦️ (NGC) Sample App Data for Multi-AI Ultrasound Pipeline

Run Instructions

  • Using a pre-recorded video

    # C++
    sed -i -e 's#^source:.*#source: replayer#' ./apps/multiai/cpp/app_config.yaml \
      && ./apps/multiai/cpp/multiai
    
    # Python
    python3 ./apps/multiai/python/multiai.py --source=replayer
    
  • Using an AJA card

    # C++
    sed -i -e 's#^source:.*#source: aja#' ./apps/multiai/cpp/app_config.yaml \
      && ./apps/multiai/cpp/multiai
    
    # Python
    python3 ./apps/multiai/python/multiai.py --source=aja
    

Running examples

The Holoscan container includes the examples below to showcase specific features of the Holoscan SDK. C++ and GXF examples currently need to be run from /opt/nvidia/holoscan to resolve the paths to the datasets. Python applications can run from any working directory.

Basic Workflow

Minimal example to demonstrate the use of adding components in a pipeline. The workflow in the example tracks tools in the endoscopy video sample data.

Data

📦️ (NGC) Sample App Data for AI-based Endoscopy Tool Tracking

Run instructions

# C++
./examples/basic_workflow/cpp/basic_workflow

# Python
python3 ./examples/basic_workflow/python/basic_workflow.py

Bring Your Own Model - Colonoscopy

This example shows how to use the Bring Your Own Model (BYOM) concept for Holoscan by changing a few properties of the ultrasound_segmentation app to run a segmentation of polyps from a colonoscopy video input instead.

Data

📦️ (NGC) Sample App Data for AI Colonoscopy Segmentation of Polyps

Run instructions

# Update the configurations (run again to reverse)
patch -ub -p0 -i examples/bring_your_own_model/python/colonoscopy_segmentation.patch
# Run the application
python3 ./apps/ultrasound_segmentation/python/ultrasound_segmentation.py

Native Operator

These examples demonstrate how to use native operators (the operators that do not have an underlying, pre-compiled GXF Codelet):

  • native_operator (C++): This example shows an application using only native operators. There are three operators involved:
    1. a transmitter, set to transmit a sequence of even integers on port out1 and odd integers on port out2
    2. a middle operator that prints the received values, multiplies them by a scalar, and transmits the modified values
    3. a receiver that prints the received values to the terminal
  • ping.py: This example is similar to the C++ native operator example, using Python.
  • convolve.py: This example demonstrates a simple 1D convolution-based signal processing application, to demonstrate passing NumPy arrays between operators as Python objects.
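The three-operator flow of the C++ example can be sketched schematically in plain Python. This mirrors the data flow only, not the Holoscan operator API; the operator names, the item count, and the scalar value are illustrative:

```python
# Schematic of the native-operator ping flow: a transmitter emitting even
# integers on out1 and odd integers on out2, a middle operator that scales
# the values, and a receiver that prints what it gets.

def transmitter(count=3):
    """Yield (out1, out2) pairs: even integers on out1, odd on out2."""
    for i in range(count):
        yield 2 * i, 2 * i + 1

def middle(stream, scalar=3):
    """Print each received pair, multiply by a scalar, and pass it on."""
    for v1, v2 in stream:
        print(f"middle received: ({v1}, {v2})")
        yield v1 * scalar, v2 * scalar

def receiver(stream):
    """Print and collect every pair that reaches the end of the pipeline."""
    received = []
    for v1, v2 in stream:
        print(f"receiver received: ({v1}, {v2})")
        received.append((v1, v2))
    return received

# Wire the three stages together, mimicking add_flow() in a Holoscan app.
values = receiver(middle(transmitter()))
```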

Run instructions

# C++
./examples/native_operator/cpp/ping

# Python
python3 ./examples/native_operator/python/ping.py
python3 ./examples/native_operator/python/convolve.py
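The 1D convolution at the heart of convolve.py can be sketched in plain Python. The kernel and signal values here are hypothetical; the real example passes NumPy arrays between operators:

```python
# 'Valid'-mode 1D convolution: slide the reversed kernel across the signal
# and sum the elementwise products at each position.
def convolve1d(signal, kernel):
    k = kernel[::-1]  # convolution flips the kernel (unlike correlation)
    n = len(signal) - len(kernel) + 1
    return [
        sum(signal[i + j] * k[j] for j in range(len(k)))
        for i in range(n)
    ]

print(convolve1d([1, 2, 3, 4], [1, 1]))  # → [3, 5, 7]
```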

C++ Tensor interoperability

This application demonstrates interoperability between a native operator (ProcessTensorOp) and two GXF Codelets (SendTensor and ReceiveTensor).

  • The input and output ports are of type holoscan::gxf::Entity so that this operator can talk directly to the GXF codelets which send/receive GXF entities.
  • The entity contains a tensor (a holoscan::Tensor, converted to a holoscan::gxf::Tensor object inside the entity) that the native operator uses to perform some computation; the output tensor (in a new entity) is then sent to the ReceiveTensor operator (codelet).
  • The ProcessTensorOp operator uses the method in holoscan::gxf::Tensor to access the tensor data and perform some processing (multiplication by two) on the tensor data.
  • The ReceiveTensor codelet gets the tensor from the entity and prints the tensor data to the terminal.

Notably, the two GXF codelets have not been wrapped as Holoscan operators, but are instead registered at runtime in the compose method of the application.

Run instructions

./examples/tensor_interop/cpp/tensor_interop

Python Tensor interoperability

This application demonstrates interoperability between a native operator (ImageProcessingOp) and two operators (VideoStreamReplayerOp and HolovizOp) that wrap existing C++-based operators using GXF Tensors, through the Holoscan Tensor object (holoscan.core.Tensor).

  • The Holoscan Tensor object is used to get the tensor data from the GXF Entity (holoscan::gxf::Entity) and perform some image processing (time-varying Gaussian blur) on the tensor data.
  • The output tensor (in a new entity) is sent to the HolovizOp operator (codelet) which gets the tensor from the entity and displays the image in the GUI. The VideoStreamReplayerOp operator is used to replay the video stream from the sample data.
  • The Holoscan Tensor object is interoperable with DLPack or array interfaces.
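The array-interface side of that interoperability can be illustrated with a toy class in plain Python. The class below is a stand-in, not the Holoscan Tensor; it shows the shape of the NumPy-style __array_interface__ protocol that lets consumers read a tensor's buffer without copying:

```python
import ctypes

class ToyTensor:
    """Toy stand-in for a tensor that exposes its buffer zero-copy."""

    def __init__(self, values):
        # Back the tensor with a C float array so we have a real address.
        self._buf = (ctypes.c_float * len(values))(*values)

    @property
    def __array_interface__(self):
        return {
            "shape": (len(self._buf),),
            "typestr": "<f4",  # little-endian 32-bit float
            # (address, read_only) pair pointing at the underlying buffer
            "data": (ctypes.addressof(self._buf), False),
            "version": 3,
        }

t = ToyTensor([1.0, 2.0, 3.0])
iface = t.__array_interface__
print(iface["shape"], iface["typestr"])
```

A consumer such as NumPy can wrap any object exposing this dict (via numpy.asarray) without copying the data.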

Requirements

This example requires cupy, which is included in the x86_64 container. You'll need to build and install cupy for arm64 to run this example on the developer kits.

Run instructions

python3 ./examples/tensor_interop/python/tensor_interop.py

Video Sources

Minimal examples using GXF YAML API to illustrate the usage of various video sources:

  • aja_capture: uses the AJA capture card with GPUDirect RDMA to avoid copies to the CPU. The renderer (holoviz) passes the video buffer from CUDA to Vulkan, also avoiding copies to the CPU. Requires setting up the AJA hardware and drivers.
  • v4l2_camera: uses Video4Linux as a source for a V4L2 node such as a USB webcam (capture goes through the CPU), then uses CUDA/OpenGL interop to avoid further copies to the CPU.
  • video_replayer: loads a video from the disk, does some format conversions, and provides a basic visualization of tensors.

Requirements

  • aja_capture: follow the setup instructions from the user guide to use the AJA capture card.
  • v4l2_camera: add --device /dev/video0:/dev/video0 to the docker run command to make your USB cameras available to the V4L2 codelet in the container.

Run instructions

./examples/video_sources/gxf/aja_capture
./examples/video_sources/gxf/v4l2_camera
./examples/video_sources/gxf/video_replayer

Building examples

The Holoscan container includes the source code of the examples listed above to showcase how you can build your own applications using Holoscan SDK and CMake. You can build from those source directories or copy them anywhere on the system to experiment with.

Example:

export src_dir="/opt/nvidia/holoscan/examples/example_of_your_choice/language_of_your_choice"
export build_dir="/path/of/your/choice/"
cmake -S $src_dir -B $build_dir -G Ninja -D CMAKE_BUILD_TYPE=Release
cmake --build $build_dir -j

If you build a C++ or GXF example that depends on external data, make sure to edit any relative path in the yaml config if you want to run from a different working directory.

License

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.