Linux / amd64
This container image contains runtime dependencies, scripts, and the
NVIDIA-proprietary binary packages that are required to build an OpenEmbedded
BSP image for NVIDIA Holoscan Developer Kits.
The Holoscan OpenEmbedded Builder Production Branch, exclusively available with NVIDIA AI Enterprise, is a 9-month supported, API-stable branch that includes monthly fixes for high and critical software vulnerabilities. This branch provides a stable and secure environment for building your mission-critical AI applications. The Holoscan production branch releases every six months with a three-month overlap in between two releases.
Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA AI Enterprise Documentation.
The following documentation provides information specific to the usage of the Holoscan Build Container and may omit details from the main documentation that are useful when configuring or using the BSP. Please see the main README file for additional documentation.
Note: the main README file can be found at meta-tegra-holoscan/README.md after following the `1. Setting up the Local Development Environment` section, below.
Also note that building a BSP for NVIDIA Holoscan requires a significant amount of resources, and at least 300GB of free disk space is required to build. See the System Requirements section in the main README for more details.
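To confirm that the host has enough free space before starting, checking the directory you plan to build in is enough (df is just one way to do this):
$ df -h .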
While it would be possible to build an OE image directly from source stored within a container, any additions or modifications to the build recipes would then live only inside the running container and would be lost whenever the container terminates. Instead, this container first sets up a local host volume with all of the recipes, dependencies, and initial configuration needed for the BSP build, so that the recipes, configuration, and build cache are kept in persistent storage on the host and are not limited to the lifespan of a single container runtime.
To perform this initial setup, navigate to the directory in which you would like to initialize the development environment and run the following (making sure IMAGE matches the name and tag of this container image):
$ export IMAGE=nvcr.io/nvaie/holoscan-oe-builder-pb23h2:23.10.00
$ docker run -it --rm -v $(pwd):/workspace --network host ${IMAGE} setup.sh ${IMAGE} $(id -u) $(id -g)
This setup process initializes the following:
poky
meta-openembedded
meta-virtualization
meta-tegra
meta-tegra-holoscan
A sample build configuration in the build folder.
A bitbake.sh wrapper script, which runs a build container and passes the arguments given to the script on to the container's bitbake command.
A flash.sh script to flash the device with the built image.
A .buildimage file which contains the name of the container image. This is used by the bitbake.sh script and avoids the need to export an IMAGE environment variable every time a build is performed.
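After setup completes, the development directory should contain roughly the following (illustrative listing; exact contents may vary between releases):
$ ls -a
.  ..  .buildimage  bitbake.sh  build  flash.sh  meta-openembedded  meta-tegra  meta-tegra-holoscan  meta-virtualization  poky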
The OE image configuration file is created by the previous step and is written to build/conf/local.conf. This file is based on the default local.conf that is created by the Poky environment setup script (oe-init-build-env) and has various NVIDIA configuration defaults and samples added to it.
For example, the MACHINE configuration in this template file is set to igx-orin-devkit; the GPU configuration is set to use the dGPU; and CUDA, TensorRT, Holoscan SDK, and the HoloHub sample applications are installed by default. This configuration can be used as-is to build a BSP for the IGX Orin Developer Kit using the A6000 dGPU, but it may be necessary to change this configuration to use the iGPU or to add additional components like Rivermax or support for third-party hardware such as AJA video capture cards or Emergent high-speed cameras. See the Build Configuration section in the main README for more details.
To see the additional configuration that is added to this file relative to the standard OpenEmbedded local.conf, as well as some documentation on which additional components offered by this meta-tegra-holoscan layer may be enabled, scroll down to the "BEGIN NVIDIA CONFIGURATION" section in this file.
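To jump straight to that section, a simple text search works (any editor or pager will do; grep is shown here):
$ grep -n "BEGIN NVIDIA CONFIGURATION" build/conf/local.conf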
Once the image has been configured in the local host development tree, the container image is used again for the actual bitbake build process. This can be done using the bitbake.sh build wrapper that is written to the root of the development directory. This script simply runs the bitbake process in the container and passes the arguments given to the script on to that process. For example, to build a Holoscan reference image, use the following:
$ ./bitbake.sh core-image-holoscan
This build is expected to take at least an hour, with build times of 3 to 4 hours expected on machines with slower hardware or internet connections.
Note: For a list of different image targets that are available to build,
see the Yocto Project Images List.
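For example, assuming the standard Poky image recipes apply in this environment, a smaller target such as core-image-minimal can be built the same way:
$ ./bitbake.sh core-image-minimal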
Note: If the build fails due to unavailable resource errors, try the build again. Builds are extremely resource-intensive, and having a number of particularly large tasks running in parallel can exceed even 32GB of system memory usage. Repeating the build can often reschedule the tasks so that they can succeed. If errors are still encountered, try lowering the value in build/conf/local.conf to reduce the maximum number of tasks that BitBake should run in parallel at any one time.
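In stock BitBake, the variable that caps the number of tasks run in parallel is BB_NUMBER_THREADS, so a more conservative setting in build/conf/local.conf might look like the following (the value 4 is only an example; tune it to your machine's memory and core count):
BB_NUMBER_THREADS = "4"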
Using the default configuration, the above script will build the BSP image and
write the final output to:
build/tmp/deploy/images/igx-orin-devkit/core-image-holoscan-igx-orin-devkit.tegraflash.tar.gz
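A quick way to confirm the artifact was produced is to list the deploy directory given above:
$ ls -lh build/tmp/deploy/images/igx-orin-devkit/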
The flash.sh script can be used to flash the BSP image that is output by the previous step onto the Holoscan Developer Kit hardware. For example, to flash the core-image-holoscan image that was produced by the previous step, connect the developer kit to the host via the USB-C debug port, put it into recovery mode, ensure the developer kit is visible to the host using lsusb, then run:
$ ./flash.sh core-image-holoscan
Note: If the doflash.sh command fails due to a No such file: 'dtc' error, install the device tree compiler (dtc) using the following:
$ sudo apt-get install device-tree-compiler
For instructions on how to put the developer kit into recovery mode and how to check that it is visible using lsusb, see the developer kit user guide.
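Once in recovery mode, the developer kit typically shows up in lsusb as an "NVIDIA Corp." entry; a simple filter makes it easy to spot (illustrative; the exact product string can vary by device):
$ lsusb | grep -i nvidia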
Note that flashing the device will require root privileges and so you may be
asked for a sudo password by this script.
Once flashed, the Holoscan Developer Kit can then be disconnected from the host
system and booted. A display, keyboard, and mouse should be attached to the
developer kit before it is booted. The display connection depends on the GPU
configuration that was used for the build: the iGPU configuration uses the
onboard Tegra display connection while the dGPU configuration uses one of the
connections on the discrete GPU. Please refer to the developer kit user guide
for diagrams showing the locations of these display connections. During boot
you will see a black screen with only a cursor for a few moments before an X11
terminal or GUI appears (depending on your image type).
When the core-image-holoscan reference image is used, the Holoscan SDK and HoloHub apps are built into the image, including some tweaks to make running the samples even easier. Upon boot, the core-image-holoscan image presents a Matchbox UI with icons for a variety of Holoscan SDK and HoloHub sample applications, all of which can be run with just a single click.
Note that the first execution of these samples will rebuild the model engine
files and it will take a few minutes before the application fully loads. These
engine files are then cached and will significantly reduce launch times for
successive executions. Check the console windows with the application logs for
additional information.
While a handful of graphical Holoscan applications have icons installed on the
desktop, many more are console-only and must be launched from a console.
When the holoscan-sdk component is installed, the Holoscan SDK is installed into the image in the /opt/nvidia/holoscan directory, with examples present in the examples subdirectory. Due to relative data paths being used by the apps, these examples should be run from the /opt/nvidia/holoscan directory. To run the C++ version of an example, simply run the executable in the example's cpp subdirectory:
$ cd /opt/nvidia/holoscan
$ ./examples/hello_world/cpp/hello_world
To run the Python version of an example, run the application in the example's python subdirectory using python3:
$ cd /opt/nvidia/holoscan
$ python3 ./examples/hello_world/python/hello_world.py
When the holohub-apps component is installed, the HoloHub sample applications are installed into the image in the /opt/nvidia/holohub directory, with the applications present in the applications subdirectory. Due to relative data paths being used by the apps, these applications should be run from the /opt/nvidia/holohub directory. To run the C++ version of an application, simply run the executable in the application's cpp subdirectory:
$ cd /opt/nvidia/holohub
$ ./applications/endoscopy_tool_tracking/cpp/endoscopy_tool_tracking
To run the Python version of an application, run the application in the python subdirectory using python3:
$ cd /opt/nvidia/holohub
$ python3 ./applications/endoscopy_tool_tracking/python/endoscopy_tool_tracking.py
Note that the first execution of the samples will rebuild the model engine files
and it will take a few minutes before the application fully loads. These engine
files are then cached and will significantly reduce launch times for successive
executions.
Please review the Security Scanning tab to view the latest security scan results.
For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.
Get access to knowledge base articles and support cases or submit a ticket.
Visit the NVIDIA AI Enterprise Documentation Hub for release documentation, deployment guides and more.
Go to the NVIDIA Licensing Portal to manage the software licenses for your products.