Linux / amd64
NVIDIA IndeX™ is a leading volume visualization tool for HPC. It takes advantage of the computational horsepower of GPUs to deliver real-time performance on large datasets by distributing visualization workloads across a GPU-accelerated cluster.
The present NVIDIA IndeX docker image is a restricted demo of NVIDIA IndeX. When started (please follow the instructions below), the browser shows a visualization of a Core-collapse Supernova volume dataset (courtesy note). NVIDIA IndeX enables you to interact with this visualization in real time; these interactions, amongst other features, let you explore the dataset directly in the browser.
More information on NVIDIA IndeX can be found on its product website. For licensing or other inquiries please contact us.
See the NGC Container User Guide for prerequisites and setup steps for all HPC containers.
The document also describes the steps to pull NGC containers.
The present release is based on CUDA 11.1 and requires GPU/CUDA driver version 455.23 or higher.
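If you are unsure which driver is installed, you can query it with nvidia-smi, for example:
nvidia-smi --query-gpu=driver_version --format=csv,noheader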
This example illustrates the steps to pull and run the NVIDIA IndeX container from the nvidia-docker command line interface (CLI). First, please issue the following command to log in to the NGC container registry:
docker login nvcr.io
When prompted for a username, please enter $oauthtoken, which is a special username indicating that you will authenticate with an API key rather than a username and password. You will then be asked for the password. Here, please enter your NVIDIA GPU Cloud API key.
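A typical login exchange looks roughly like this (the password value is a placeholder for your own NGC API key):
docker login nvcr.io
Username: $oauthtoken
Password: <your NGC API key>
Login Succeeded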
To pull the NVIDIA IndeX docker image, please issue the following command:
docker pull nvcr.io/nvidia-hpcvis/index:2.2
Below you'll find information on how to run the container on one or multiple hosts with docker or singularity.
Once the docker image is downloaded to your machine, please run the NVIDIA IndeX container as follows:
docker run --runtime nvidia -p 8080:8080 nvcr.io/nvidia-hpcvis/index:2.2 --single
The NVIDIA IndeX server application starts immediately and loads a dataset. You can now connect with the Chrome (or Chromium) browser to the NVIDIA IndeX server running on the machine. Please open
http://<ip>:8080
in your Chrome browser, where <ip> refers to the server that runs NVIDIA IndeX. The Chrome browser loads the HTML5-based NVIDIA IndeX client web interface. The same connection instructions apply to the multi-node scenarios below.
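If the server is remote and port 8080 is not reachable directly, one common workaround (an assumption about your network setup, not part of the container) is an SSH tunnel; the interface is then available at http://localhost:8080:
ssh -N -L 8080:localhost:8080 user@<ip>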
Please note that multi-node containers need a special license.
There are a few considerations to take into account when starting an IndeX cluster:
Launching the viewer:
sudo docker run --runtime nvidia \
-p 8080:8080 -p 10000:10000 -p 10001:10001 -p 5555:5555 \
nvcr.io/nvidia-hpcvis/index:2.2 \
-dice::network::mode TCP_WITH_DISCOVERY \
-dice::network::discovery_address $VIEWER_DISC_IP:5555 \
-app::cluster_size 2
Launching the worker node:
sudo docker run \
-p 8080:8080 -p 10000:10000 -p 10001:10001 -p 5555:5555 \
nvcr.io/nvidia-hpcvis/index:2.2 \
-dice::network::mode TCP_WITH_DISCOVERY \
-dice::network::discovery_address $VIEWER_DISC_IP:5555 \
-app::cluster_size 2 \
-app::host_mode remote_service
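The commands above and below assume that $VIEWER_DISC_IP holds the IP address of the host running the viewer. A minimal sketch of setting it on each host (the address shown is only a placeholder):
export VIEWER_DISC_IP=192.168.1.10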
If you have multiple interfaces attached to your hosts (in different subnets), please select the cluster interface by specifying its subnet (-dice::network::cluster_interface_address 192.168.1.0/24) or its IP address (-dice::network::cluster_interface_address 192.168.1.X).
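If you are unsure which subnet an interface belongs to, the iproute2 tooling on the host lists the configured addresses, for example:
ip -4 addr show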
For InfiniBand you will have to use host networking with a few tweaks to access the hardware capabilities.
Launching the viewer:
sudo docker run --runtime nvidia \
--shm-size='16G' --device=/dev/infiniband --cap-add=IPC_LOCK --net=host \
nvcr.io/nvidia-hpcvis/index:2.2 \
-dice::network::mode TCP_WITH_DISCOVERY \
-dice::network::discovery_address $VIEWER_DISC_IP:5555 \
-app::cluster_size 2
Launching the worker node:
sudo docker run \
--shm-size='16G' --device=/dev/infiniband --cap-add=IPC_LOCK --net=host \
nvcr.io/nvidia-hpcvis/index:2.2 \
-dice::network::mode TCP_WITH_DISCOVERY \
-dice::network::discovery_address $VIEWER_DISC_IP:5555 \
-app::cluster_size 2 \
-app::host_mode remote_service
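Before launching, you may want to verify that the InfiniBand devices are actually exposed on the host. A quick sanity check (ibstat is part of the infiniband-diags package) could look like:
ls /dev/infiniband
ibstat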
To run the container on a single host with Singularity, issue:
singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 --single
The NVIDIA IndeX server application starts immediately and loads a dataset. You can now connect with the Chrome (or Chromium) browser to the NVIDIA IndeX server running on the machine. Please open
http://<ip>:8080
in your Chrome browser, where <ip> refers to the server that runs NVIDIA IndeX. The Chrome browser loads the HTML5-based NVIDIA IndeX client web interface.
Starting the Viewer node:
singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 \
-dice::network::mode TCP_WITH_DISCOVERY \
-dice::network::discovery_address $VIEWER_DISC_IP:5555 \
-app::cluster_size 2
Starting the Worker node:
singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 \
--add project_remote.prj \
-dice::network::mode TCP_WITH_DISCOVERY \
-dice::network::discovery_address $VIEWER_DISC_IP:5555 \
-app::cluster_size 2
If you have multiple interfaces attached to your hosts (in different subnets), please select the cluster interface by specifying its subnet (-dice::network::cluster_interface_address 192.168.1.0/24) or its IP address (-dice::network::cluster_interface_address 192.168.1.X).
When using Slurm, the container scripts detect this automatically so you can launch everything with one command from the launch node:
srun -N3 \
singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 \
-app::cluster_size 3
Here, the cluster size is 3.
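For batch submission instead of an interactive srun, a minimal sbatch sketch could look like the following (the job name is arbitrary, and any partition or GPU options depend on your site configuration):
#!/bin/bash
#SBATCH -N 3
#SBATCH --job-name=index-demo
srun singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 \
-app::cluster_size 3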
However, if you want to run the commands individually as in the previous sections, disable the Slurm auto-detection by clearing the job ID before launching:
export SLURM_JOB_ID=
srun -N3 \
singularity run --nv docker://nvcr.io/nvidia-hpcvis/index:2.2 \
-app::cluster_size 3
If you have your own NVIDIA IndeX license, you must bind mount it to the container path /opt/nvidia-index/demo/license.lic.
For Docker, add the following command line parameter:
-v /host/path/to/license.lic:/opt/nvidia-index/demo/license.lic
For Singularity, add the following command line parameter:
-B /host/path/to/license.lic:/opt/nvidia-index/demo/license.lic
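For example, a single-host Docker launch with your license file mounted (the host path is a placeholder) combines the flags shown earlier:
docker run --runtime nvidia -p 8080:8080 \
-v /host/path/to/license.lic:/opt/nvidia-index/demo/license.lic \
nvcr.io/nvidia-hpcvis/index:2.2 --single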