Linux / amd64
Linux / arm64
The Holoscan container is part of NVIDIA Holoscan, the AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core microservices to run streaming, imaging, and other applications, from embedded to edge to cloud. It can be used to build streaming AI pipelines for a variety of domains, including Medical Devices, High Performance Computing at the Edge, Industrial Inspection, and more.
In previous releases, the prefix Clara was used to define Holoscan as a platform designed initially for medical devices. As Holoscan has grown, its potential to serve other areas has become apparent. With version 0.4.0, we're proud to announce that the Holoscan SDK is now officially built to be domain-agnostic and can be used to build sensor AI applications in multiple domains. Note that some of the content of the SDK (sample applications) or the documentation might still appear to be healthcare-specific pending additional updates. Going forward, domain-specific content will be hosted on the HoloHub repository.
The Holoscan container includes the Holoscan libraries, GXF extensions, headers, example source code, and sample datasets. It is the simplest way to run sample streaming applications or create your own application using the Holoscan SDK.
Visit the NGC demo website for a live demonstration of some of Holoscan's capabilities.
The Holoscan container is designed to run on any of the Holoscan Developer Kits (aarch64) as well as x86_64 systems.
For a full list of Holoscan documentation, visit the Holoscan developer page.
Set up your developer kit:
Make sure you have joined the Holoscan SDK Program and, if needed, the Rivermax SDK Program before using the NVIDIA SDK Manager.
SDK Manager will install HoloPack 1.1 as well as the nvgpuswitch.py script.
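For reference, querying and switching the GPU mode with nvgpuswitch.py might look like this (a sketch; the exact arguments are an assumption, so refer to the user guide for the authoritative invocation):
# Query the currently installed GPU driver stack (assumed usage)
nvgpuswitch.py query
# Install the dGPU drivers (assumed usage; requires sudo and internet access)
sudo nvgpuswitch.py install dGPU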
Once configured for dGPU mode, your developer kit will include the following necessary components to run the container:
Refer to the User Guide for additional steps to support the AJA capture card.
You'll need the following to run the container on x86_64:
Press the Copy Image Path button at the top of this webpage, choose the version you want to test, and set this as your NGC_CONTAINER_IMAGE_PATH in your terminal:
# For example
export NGC_CONTAINER_IMAGE_PATH="nvcr.io/nvidia/clara-holoscan/holoscan:v0.4.0"
Ensure that X11 is configured to allow commands from docker:
xhost +local:docker
Start the container:
Add --device /dev/ajantv20:/dev/ajantv20 to the docker run command if you also have an AJA capture card you'd like to access from the container. Similarly, add --device /dev/video0:/dev/video0 (and/or /dev/video1, etc.) to make your USB cameras available to the V4L2 codelet in the container. An example combining these flags follows the base command below.
# Find the nvidia_icd.json file which could reside at different paths
# Needed due to https://github.com/NVIDIA/nvidia-container-toolkit/issues/16
nvidia_icd_json=$(find /usr/share /etc -path '*/vulkan/icd.d/nvidia_icd.json' -type f 2>/dev/null | grep .) || (echo "nvidia_icd.json not found" >&2 && false)
# Run the container
docker run -it --rm --net host \
--runtime=nvidia \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $nvidia_icd_json:$nvidia_icd_json:ro \
-e NVIDIA_DRIVER_CAPABILITIES=graphics,video,compute,utility,display \
-e DISPLAY=$DISPLAY \
${NGC_CONTAINER_IMAGE_PATH}
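For example, to expose both an AJA capture card and a USB camera (a sketch combining the flags described above; the device node names are illustrative and vary by system):
# Same command, with an AJA card and a USB camera exposed to the container
docker run -it --rm --net host \
  --runtime=nvidia \
  --device /dev/ajantv20:/dev/ajantv20 \
  --device /dev/video0:/dev/video0 \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v $nvidia_icd_json:$nvidia_icd_json:ro \
  -e NVIDIA_DRIVER_CAPABILITIES=graphics,video,compute,utility,display \
  -e DISPLAY=$DISPLAY \
  ${NGC_CONTAINER_IMAGE_PATH}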
The Holoscan container includes the applications below. C++ applications currently need to be run from /opt/nvidia/holoscan to resolve the paths to the datasets. Python applications can run from any working directory since HOLOSCAN_SAMPLE_DATA_PATH is set as an environment variable in the container.
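For example, before launching any of the C++ applications below:
# Run from the SDK directory so relative dataset paths resolve
cd /opt/nvidia/holoscan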
Based on an LSTM (long short-term memory) stateful model, these applications demonstrate the use of custom components for tool tracking, including composition and rendering of text, tool position, and mask (as heatmap) combined with the original video stream.
The provided applications are configured to use either the AJA capture card or a pre-recorded endoscopy video (replayer) as the input stream. Follow the setup instructions from the user guide to use the AJA capture card.
📦️ (NGC) Sample App Data for AI-based Endoscopy Tool Tracking
Using a pre-recorded video
# C++
sed -i -e 's#^source:.*#source: replayer#' ./apps/endoscopy_tool_tracking/cpp/app_config.yaml \
&& ./apps/endoscopy_tool_tracking/cpp/endoscopy_tool_tracking
# Python
python3 ./apps/endoscopy_tool_tracking/python/endoscopy_tool_tracking.py --source=replayer
Using an AJA card
# C++
sed -i -e 's#^source:.*#source: aja#' ./apps/endoscopy_tool_tracking/cpp/app_config.yaml \
&& ./apps/endoscopy_tool_tracking/cpp/endoscopy_tool_tracking
# Python
python3 ./apps/endoscopy_tool_tracking/python/endoscopy_tool_tracking.py --source=aja
A full workflow including a generic visualization of segmentation results from a spinal scoliosis segmentation model of ultrasound videos. The model used is stateless, so this workflow could be adapted to any vanilla DNN model.
The provided applications are configured to use either the AJA capture card or a pre-recorded ultrasound video (replayer) as the input stream. Follow the setup instructions from the user guide to use the AJA capture card.
📦️ (NGC) Sample App Data for AI-based Bone Scoliosis Segmentation
Using a pre-recorded video
# C++
sed -i -e 's#^source:.*#source: replayer#' ./apps/ultrasound_segmentation/cpp/app_config.yaml \
&& ./apps/ultrasound_segmentation/cpp/ultrasound_segmentation
# Python
python3 ./apps/ultrasound_segmentation/python/ultrasound_segmentation.py --source=replayer
Using an AJA card
# C++
sed -i -e 's#^source:.*#source: aja#' ./apps/ultrasound_segmentation/cpp/app_config.yaml \
&& ./apps/ultrasound_segmentation/cpp/ultrasound_segmentation
# Python
python3 ./apps/ultrasound_segmentation/python/ultrasound_segmentation.py --source=aja
This application demonstrates how to run multiple inference pipelines in a single application by leveraging the Holoscan Inference module, a framework that facilitates designing and executing inference applications in the Holoscan SDK.
The Multi AI operators (inference and postprocessor) use APIs from the Holoscan Inference module to extract data, initialize and execute the inference workflow, process, and transmit data for visualization.
The application uses models and echocardiogram data from iCardio.ai. The models include:
The provided applications are configured to use either the AJA capture card or a pre-recorded echocardiogram video (replayer) as the input stream. Follow the setup instructions from the user guide to use the AJA capture card.
📦️ (NGC) Sample App Data for Multi-AI Ultrasound Pipeline
Using a pre-recorded video
# C++
sed -i -e 's#^source:.*#source: replayer#' ./apps/multiai/cpp/app_config.yaml \
&& ./apps/multiai/cpp/multiai
# Python
python3 ./apps/multiai/python/multiai.py --source=replayer
Using an AJA card
# C++
sed -i -e 's#^source:.*#source: aja#' ./apps/multiai/cpp/app_config.yaml \
&& ./apps/multiai/cpp/multiai
# Python
python3 ./apps/multiai/python/multiai.py --source=aja
The Holoscan container includes the examples below to showcase specific features of the Holoscan SDK. C++ and GXF examples currently need to be run from /opt/nvidia/holoscan to resolve the paths to the datasets. Python applications can run from any working directory.
A minimal example demonstrating how to add components to a pipeline. The workflow in this example tracks tools in the endoscopy video sample data.
📦️ (NGC) Sample App Data for AI-based Endoscopy Tool Tracking
# C++
./examples/basic_workflow/cpp/basic_workflow
# Python
python3 ./examples/basic_workflow/python/basic_workflow.py
This example shows how to use the Bring Your Own Model (BYOM) concept for Holoscan by changing a few properties of the ultrasound_segmentation app to run a segmentation of polyps from a colonoscopy video input instead.
📦️ (NGC) Sample App Data for AI Colonoscopy Segmentation of Polyps
# Update the configurations (run again to reverse)
patch -ub -p0 -i examples/bring_your_own_model/python/colonoscopy_segmentation.patch
# Run the application
python3 ./apps/ultrasound_segmentation/python/ultrasound_segmentation.py
These examples demonstrate how to use native operators (operators that do not have an underlying, pre-compiled GXF Codelet):
native_operator (C++): This example shows an application using only native operators. There are three operators involved: a transmitter, set to transmit a sequence of even integers on port out1 and odd integers on port out2; a middle operator that prints the received values, multiplies them by a scalar, and transmits the modified values; and a receiver that prints the received values to the terminal.
ping.py: This example is similar to the C++ native operator example, using Python.
convolve.py: This example demonstrates a simple 1D convolution-based signal processing application, showing how NumPy arrays can be passed between operators as Python objects.
# C++
./examples/native_operator/cpp/ping
# Python
python3 ./examples/native_operator/python/ping.py
python3 ./examples/native_operator/python/convolve.py
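For illustration, a minimal pair of native Python operators might look like the sketch below. It is modeled on the ping example and assumes the holoscan.core Operator/Application API used by the SDK's Python examples:
# Sketch of native Python operators (modeled on the ping example; assumes
# the holoscan.core API used by the SDK's Python examples)
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec

class PingTxOp(Operator):
    # Transmits an incrementing integer on port "out"
    def __init__(self, *args, **kwargs):
        self.index = 0
        super().__init__(*args, **kwargs)

    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def compute(self, op_input, op_output, context):
        self.index += 1
        op_output.emit(self.index, "out")

class PingRxOp(Operator):
    # Prints whatever arrives on port "in"
    def setup(self, spec: OperatorSpec):
        spec.input("in")

    def compute(self, op_input, op_output, context):
        print(f"received: {op_input.receive('in')}")

class MyPingApp(Application):
    def compose(self):
        # The CountCondition stops the transmitter after 10 messages
        tx = PingTxOp(self, CountCondition(self, 10), name="tx")
        rx = PingRxOp(self, name="rx")
        self.add_flow(tx, rx)

if __name__ == "__main__":
    MyPingApp().run()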
This application demonstrates interoperability between a native operator (ProcessTensorOp) and two GXF Codelets (SendTensor and ReceiveTensor).
The input and output ports of the native operator are defined as holoscan::gxf::Entity so that this operator can talk directly to the GXF codelets which send/receive GXF entities.
The SendTensor codelet sends an entity containing a tensor (holoscan::Tensor, which is converted to a holoscan::gxf::Tensor object inside the entity); the native operator uses it to perform some computation, and the output tensor (in a new entity) is then sent to the ReceiveTensor operator (codelet).
The ProcessTensorOp operator uses the methods in holoscan::gxf::Tensor to access the tensor data and perform some processing (multiplication by two) on it.
The ReceiveTensor codelet gets the tensor from the entity and prints the tensor data to the terminal.
Notably, the two GXF codelets have not been wrapped as Holoscan operators, but are instead registered at runtime in the compose method of the application.
./examples/tensor_interop/cpp/tensor_interop
This application demonstrates interoperability between a native operator (ImageProcessingOp) and two operators (VideoStreamReplayerOp and HolovizOp) that wrap existing C++-based operators using GXF Tensors, through the Holoscan Tensor object (holoscan.core.Tensor).
The native operator receives the video frames as entities (holoscan::gxf::Entity) and performs some image processing (time-varying Gaussian blur) on the tensor data.
The processed tensor is sent to the HolovizOp operator (codelet), which gets the tensor from the entity and displays the image in the GUI. The VideoStreamReplayerOp operator is used to replay the video stream from the sample data.
This example requires CuPy, which is included in the x86_64 container. You'll need to build and install CuPy for arm64 if you want to run this example on the developer kits.
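If you need CuPy on a developer kit, one option is building it from source inside the container via pip (an assumption, not an official instruction; the package name and build requirements depend on your CUDA version):
# Build and install CuPy from source on arm64 (assumed approach; slow)
python3 -m pip install cupy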
python3 ./examples/tensor_interop/python/tensor_interop.py
Minimal examples using the GXF YAML API to illustrate the usage of various video sources:
aja_capture: uses the AJA capture card with GPUDirect RDMA to avoid copies to the CPU. The renderer (holoviz) moves the video buffer from CUDA to Vulkan, likewise avoiding copies to the CPU. Requires setting up the AJA hardware and drivers.
v4l2_camera: uses Video4Linux as a source, to use with a V4L2 node such as a USB webcam (goes through the CPU). It uses CUDA/OpenGL interop to avoid copies back to the CPU.
video_replayer: loads a video from disk, does some format conversions, and provides a basic visualization of tensors.
To run aja_capture, follow the setup instructions from the user guide to use the AJA capture card. To run v4l2_camera, add --device /dev/video0:/dev/video0 to the docker run command to make your USB cameras available to the V4L2 codelet in the container (see the device check after the commands below).
./examples/video_sources/gxf/aja_capture
./examples/video_sources/gxf/v4l2_camera
./examples/video_sources/gxf/video_replayer
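To check which node your camera is on before adding the --device flag (a generic Linux check, not specific to Holoscan):
# List the V4L2 device nodes available on the host
ls /dev/video*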
The Holoscan container includes the source code of the examples listed above to showcase how you can build your own applications using Holoscan SDK and CMake. You can build from those source directories or copy them anywhere on the system to experiment with.
Example:
export src_dir="/opt/nvidia/holoscan/examples/example_of_your_choice/language_of_your_choice"
export build_dir="/path/of/your/choice/"
cmake -S $src_dir -B $build_dir -G Ninja -D CMAKE_BUILD_TYPE=Release
cmake --build $build_dir -j
If you build a C++ or GXF example that depends on external data, make sure to edit any relative paths in the YAML config if you want to run from a different working directory.
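After a successful build, run the binary from /opt/nvidia/holoscan so those dataset paths resolve; for example, assuming you built the C++ basic_workflow example (the binary name here is illustrative):
# Run the freshly built example from the SDK directory
cd /opt/nvidia/holoscan
$build_dir/basic_workflow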
By pulling and using the container, you accept the terms and conditions of this End User License Agreement.