Linux / amd64
NVIDIA Holoscan is the AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core microservices to run streaming, imaging, and other applications, from embedded to edge to cloud. It can be used to build streaming AI pipelines for a variety of domains, including Medical Devices, High Performance Computing at the Edge, Industrial Inspection, and more.
The Holoscan container includes the Holoscan libraries, GXF extensions, headers, example source code, and sample datasets, as well as all the dependencies that were tested with Holoscan. It is the recommended way to run the Holoscan examples, while still allowing you to create your own C++ and Python Holoscan application.
The Holoscan Production Branch, exclusively available with NVIDIA AI Enterprise, is a 9-month supported, API-stable branch that includes monthly fixes for high- and critical-severity software vulnerabilities. This branch provides a stable and secure environment for building your mission-critical AI applications. Holoscan Production Branches release every six months, with a three-month overlap between two consecutive releases. The Holoscan SDK version used for this Production Branch is 2.0.0.
Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA IGX Orin Documentation.
Visit the Holoscan User Guide to get started with the Holoscan SDK.
Prerequisites for each supported platform are documented in the user guide.
Additionally, you'll need the NVIDIA Container Toolkit version 1.14.1 and Docker.
Log in to the NGC docker registry (use $oauthtoken as the username and your NGC API key as the password):
docker login nvcr.io
Press the Get Container button at the top of this webpage and choose the version you want to use. You can set it as NGC_CONTAINER_IMAGE_PATH in your terminal for the next steps to use:
# For example
export NGC_CONTAINER_IMAGE_PATH="nvcr.io/nvaie/holoscan-pb24h1:24.05.07"
If using a display, ensure that X11 is configured to allow commands from docker:
xhost +local:docker
Start the container
docker run -it --rm --net host \
--runtime=nvidia \
-e NVIDIA_DRIVER_CAPABILITIES=all \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
--ipc=host \
--cap-add=CAP_SYS_PTRACE \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
${NGC_CONTAINER_IMAGE_PATH}
- --runtime=nvidia and -e NVIDIA_DRIVER_CAPABILITIES=all are properties of the NVIDIA Container Toolkit to leverage the NVIDIA GPUs and their capabilities. Read more here.
- -v /tmp/.X11-unix and -e DISPLAY are needed to enable X11 display forwarding.
- --ipc=host, --cap-add=CAP_SYS_PTRACE, --ulimit memlock=-1, and --ulimit stack=67108864 are required to run distributed applications with UCX. Read more here.
To expose additional hardware devices from your host to the container, add the --privileged flag to docker run (not secure), or mount their explicit device nodes by adding the flags below:
- AJA capture cards: --device /dev/ajantv20 (and/or ajantv2<n>).
- V4L2 video devices: --device /dev/video0 (and/or video<n>). If configuring a non-root user in the container, add --group-add video or ensure the user has appropriate permissions to the video device nodes (/dev/video*).
- ConnectX RDMA devices: --device /dev/infiniband/rdma_cm and --device /dev/infiniband/uverbs0 (and/or uverbs<n>).
The Holoscan SDK is installed under /opt/nvidia/holoscan. It includes a CMake configuration file inside lib/cmake/holoscan, allowing you to import holoscan in your CMake project (link libraries + include headers):
find_package(holoscan REQUIRED CONFIG PATHS "/opt/nvidia/holoscan")
target_link_libraries(yourTarget PUBLIC holoscan::core)
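As a minimal sketch, the two lines above can be dropped into a complete project file; my_app and main.cpp below are placeholder names for your own target and source, not part of the SDK:

```cmake
cmake_minimum_required(VERSION 3.20)
project(my_holoscan_app CXX)

# Locate the Holoscan SDK shipped inside the container
find_package(holoscan REQUIRED CONFIG PATHS "/opt/nvidia/holoscan")

# "my_app" and "main.cpp" are placeholders for your own application
add_executable(my_app main.cpp)
target_link_libraries(my_app PUBLIC holoscan::core)
```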
Alternatives to hardcoding PATHS inside find_package in CMake are listed under the Config Mode Search Procedure documentation.
For Python developers, the PYTHONPATH is already set to include /opt/nvidia/holoscan/python/lib, allowing you to simply call import holoscan.
Python, C++, and GXF examples are installed in /opt/nvidia/holoscan/examples alongside their source code and run instructions (also available on the GitHub repository).
To run the Hello World example:
# Python
python3 /opt/nvidia/holoscan/examples/hello_world/python/hello_world.py
# C++
/opt/nvidia/holoscan/examples/hello_world/cpp/hello_world
Refer to the README in each example folder for specific run instructions.
You can rebuild the C++ and GXF examples as-is or copy them anywhere on your system to experiment with.
To build all the C++ and GXF examples:
export src_dir="/opt/nvidia/holoscan/examples/" # Add "<example_of_your_choice>/cpp" to build a specific example
export build_dir="/opt/nvidia/holoscan/examples/build" # Or the path of your choice
cmake -S $src_dir -B $build_dir -D Holoscan_ROOT="/opt/nvidia/holoscan" -G Ninja
cmake --build $build_dir -j
Also see the HoloHub repository for a collection of Holoscan operators and applications which you can use in your pipeline or for reference.
Please review the Security Scanning tab to view the latest security scan results.
For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.
There is a bug in RAPIDS whereby attempting to serialize any cudf dataframe whose column names are numpy integers will result in a TypeError similar to TypeError: can not serialize 'numpy.int64' object. A fix will be provided in the next Production Branch October 2024 (PB24h2) release. As a workaround, rewrite the dataframe column names by getting the underlying int/float value from the numpy type and reassigning that value as the column name.
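A minimal sketch of that workaround, illustrated here with plain numpy scalars rather than an actual cudf dataframe (with cudf, you would reassign the resulting list to df.columns the same way):

```python
import numpy as np

# Column labels as they might come out of an upstream cudf operation:
# numpy integer scalars, which the serializer cannot handle.
cols = [np.int64(0), np.int64(1), "label"]

# Workaround: convert each numpy scalar back to a native Python value
# with .item() before reassigning it as the column name.
fixed = [c.item() if isinstance(c, np.generic) else c for c in cols]
# With an actual dataframe this would be: df.columns = fixed
```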
Get access to knowledge base articles and support cases. File a Ticket
Learn more about how to deploy NVIDIA AI Enterprise and access more technical information by visiting the documentation hub.
Access the NVIDIA Licensing Portal to manage your software licenses.