
NVIDIA cuQuantum Appliance

Publisher: NVIDIA
Latest Tag: 23.10-devel-ubuntu22.04-arm64
Modified: April 1, 2024
Compressed Size: 5.82 GB
Multinode Support: Yes
Multi-Arch Support: No
Platform: Linux / arm64

The NVIDIA cuQuantum Appliance is a highly performant multi-GPU multi-node
solution for quantum circuit simulation. It contains NVIDIA’s cuStateVec and
cuTensorNet libraries which optimize state vector and tensor network simulation,
respectively. The cuTensorNet library’s functionality is accessible through
Python for tensor network operations. With the cuStateVec library, NVIDIA
provides the following simulators:

- IBM’s Qiskit Aer frontend via cusvaer, NVIDIA’s distributed state vector
  backend solver.
- An optimized multi-GPU Google Cirq frontend via qsim, Google’s state vector
  simulator.


Prerequisites

Using NVIDIA’s cuQuantum Appliance NGC Container requires the host system to
have the following installed:

- Docker Engine
- NVIDIA GPU Drivers
- NVIDIA Container Toolkit

For supported versions, see the container release notes. No other installation,
compilation, or dependency management is required.
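
As a quick sanity check, you can confirm that Docker, the driver, and the
container toolkit cooperate before pulling the appliance (a minimal sketch; the
CUDA base image tag below is illustrative and should match your installed
driver):

# verify Docker, the NVIDIA driver, and GPU passthrough into a container
...$ docker --version
...$ nvidia-smi
...$ docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi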


Running the NVIDIA cuQuantum Appliance with Cirq or Qiskit

# pull the image
...$ docker pull nvcr.io/nvidia/cuquantum-appliance:23.10
# launch the container interactively
...$ docker run --gpus all \
       -it --rm nvcr.io/nvidia/cuquantum-appliance:23.10
# interactive launch, but enumerate only GPUs 0,3
...$ docker run --gpus '"device=0,3"' \
       -it --rm nvcr.io/nvidia/cuquantum-appliance:23.10
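
To make your own circuit files available inside the container, you can
bind-mount a host directory (a sketch; the host path my_circuits is
hypothetical):

# interactive launch with a host directory mounted into the container
...$ docker run --gpus all -it --rm \
       -v "$(pwd)/my_circuits:/home/cuquantum/my_circuits" \
       nvcr.io/nvidia/cuquantum-appliance:23.10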

The examples are located under /home/cuquantum/examples. Confirm this with the
following command:

...$ docker run --gpus all --rm \
       nvcr.io/nvidia/cuquantum-appliance:23.10 ls \
       -la /home/cuquantum/examples
...

==========================================================================
===                 NVIDIA CUQUANTUM APPLIANCE v23.10                  ===
==========================================================================
=== COPYRIGHT © NVIDIA CORPORATION & AFFILIATES.  All rights reserved. ===
==========================================================================

INFO: nvidia devices detected
INFO: gpu functionality will be available

total 36
drwxr-xr-x 2 cuquantum cuquantum 4096 Nov 10 01:52 .
drwxr-x--- 1 cuquantum cuquantum 4096 Nov 10 01:54 ..
-rw-r--r-- 1 cuquantum cuquantum 2150 Nov 10 01:52 ghz.py
-rw-r--r-- 1 cuquantum cuquantum 7436 Nov 10 01:52 hidden_shift.py
-rw-r--r-- 1 cuquantum cuquantum 1396 Nov 10 01:52 qiskit_ghz.py
-rw-r--r-- 1 cuquantum cuquantum 8364 Nov 10 01:52 simon.py

Running the examples is straightforward:

#### without an interactive session:
...$ docker run --gpus all --rm \
       nvcr.io/nvidia/cuquantum-appliance:23.10 \
         python /home/cuquantum/examples/{example_name}.py
#### with an interactive session:
...$ docker run --gpus all --rm -it \
       nvcr.io/nvidia/cuquantum-appliance:23.10
...
(cuquantum-23.10) cuquantum@...:~$ cd examples && python {example_name}.py

The examples all accept runtime arguments. To see them, append --help to the
python invocation. For two of the examples, ghz.py and qiskit_ghz.py, the help
messages are as follows:

(cuquantum-23.10) cuquantum@...:~/examples$ python ghz.py --help
usage: ghz.py [-h] [--nqubits NQUBITS] [--nsamples NSAMPLES] [--ngpus NGPUS]

GHZ circuit

options:
  -h, --help           show this help message and exit
  --nqubits NQUBITS    the number of qubits in the circuit
  --nsamples NSAMPLES  the number of samples to take
  --ngpus NGPUS        the number of GPUs to use
(cuquantum-23.10) cuquantum@...:~/examples$ python qiskit_ghz.py --help
usage: qiskit_ghz.py [-h] [--nbits NBITS] [--precision {single,double}] [--disable-cusvaer]

Qiskit ghz.

options:
  -h, --help            show this help message and exit
  --nbits NBITS         the number of qubits
  --precision {single,double}
                        numerical precision
  --disable-cusvaer     disable cusvaer

Importantly, ghz.py implements the GHZ circuit using Cirq as a frontend, and
qiskit_ghz.py implements the GHZ circuit using Qiskit as a frontend. The
cuQuantum Appliance modifies the backends of these frameworks, optimizing them
for NVIDIA platforms. Information regarding these alterations is available in
the Appliance section of the NVIDIA cuQuantum documentation.
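
For example, using the flags shown in the --help output above, you can run the
same script with and without the cusvaer backend to compare the two paths:

(cuquantum-23.10) cuquantum@...:~/examples$ python qiskit_ghz.py --nbits 30
(cuquantum-23.10) cuquantum@...:~/examples$ python qiskit_ghz.py --nbits 30 --disable-cusvaer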

Running cd examples && python ghz.py --nqubits 30 will create and simulate a
GHZ circuit running on a single GPU. To run on 4 available GPUs, use
... python ghz.py --nqubits 30 --ngpus 4. The output will look something like this:

(cuquantum-23.10) cuquantum@...:~/examples$ python ghz.py --nqubits 30
q(0),...,q(29)=111,...,111

Likewise, cd examples && python qiskit_ghz.py --nbits 30 will create and
simulate a GHZ circuit running on all available GPUs. To run on 4 GPUs, you
need to launch the container and explicitly enumerate the GPUs you want to use:

#### interactively:
...$ docker run --gpus '"device=0,1,2,3"' \
       -it --rm nvcr.io/nvidia/cuquantum-appliance:23.10
(cuquantum-23.10) cuquantum@...:~$ cd examples
(cuquantum-23.10) cuquantum@...:~$ python qiskit_ghz.py --nbits 30
#### noninteractively:
...$ docker run --gpus '"device=0,1,2,3"' \
       --rm nvcr.io/nvidia/cuquantum-appliance:23.10 \
       python /home/cuquantum/examples/qiskit_ghz.py --nbits 30

The output from qiskit_ghz.py looks like this:

(cuquantum-23.10) cuquantum@...:~$ cd examples
(cuquantum-23.10) cuquantum@...:~$ python qiskit_ghz.py --nbits 30
...
precision: single
{'0...0': 520, '1...1': 504}

More information, examples, and utilities are available in the NVIDIA cuQuantum
repository on GitHub. Notably, you can find useful guides for getting started
with multi-node multi-GPU simulation using the benchmark tools.
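
To browse those guides and tools locally, you can clone the repository (a
sketch; only the repository itself is referenced above, and its directory
layout may change):

...$ git clone https://github.com/NVIDIA/cuQuantum.git
...$ ls cuQuantum/benchmarks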


Known issues

For tags: *23.10-*-arm64

When using ssh in the container, the following error is emitted:

(cuquantum-23.10) cuquantum@...:~$ ssh ...
OpenSSL version mismatch. ...

As a workaround, please specify LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libcrypto.so.3:

LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libcrypto.so.3 ssh ...
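
If you call ssh repeatedly, a shell alias keeps the workaround out of every
invocation (a convenience sketch for the current session only):

(cuquantum-23.10) cuquantum@...:~$ alias ssh='LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libcrypto.so.3 ssh'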

Software in the container

Default user environment

The default user in the container is cuquantum with user ID 1000. The
cuquantum user is a member of the sudo group. By default, executing commands
with sudo using the cuquantum user requires a password which can be obtained
by reading the file located at /home/cuquantum/.README formatted as
{user}:{password}.
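
For example (the password itself is elided here; read it from the file as
described above):

(cuquantum-23.10) cuquantum@...:~$ cat /home/cuquantum/.README
cuquantum:...
(cuquantum-23.10) cuquantum@...:~$ sudo apt-get update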

To acquire new packages, we recommend using conda install -c conda-forge ...
in the default environment (cuquantum-23.10). You may clone this environment
and change the name using conda create --name {new_name} --clone cuquantum-23.10.
This may be useful in isolating your changes from the default environment.
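
For instance (the environment name my-env is illustrative):

(cuquantum-23.10) cuquantum@...:~$ conda create --name my-env --clone cuquantum-23.10
(cuquantum-23.10) cuquantum@...:~$ conda activate my-env
(my-env) cuquantum@...:~$ conda install -c conda-forge <package>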

CUDA is available under /usr/local/cuda. /usr/local/cuda is a symbolic link
managed by update-alternatives. To query configuration information, use
update-alternatives --config cuda.
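
For example, to see which CUDA installation the link currently resolves to
(update-alternatives --display is a read-only query):

(cuquantum-23.10) cuquantum@...:~$ ls -ld /usr/local/cuda
(cuquantum-23.10) cuquantum@...:~$ update-alternatives --display cuda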

MPI

We provide Open MPI v4.1 in the container located at /usr/local/openmpi. The
default mpirun runtime configuration can be queried with ompi_info --all --parseable.
When using the multi-GPU features of the cuQuantum Appliance, a valid and
compatible mpirun runtime configuration must be exposed to the container and
accessible to the container runtime.
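
As a sketch of a distributed launch (the rank count and qubit count are
illustrative; cusvaer is designed to partition the state vector across MPI
ranks):

# run the Qiskit GHZ example across 4 MPI ranks
...$ docker run --gpus all --rm nvcr.io/nvidia/cuquantum-appliance:23.10 \
       mpirun -np 4 python /home/cuquantum/examples/qiskit_ghz.py --nbits 32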

If you observe warnings or errors like the following when calling mpirun in the
container:

[LOG_CAT_ML] You must specify a valid HCA device by setting:
-x HCOLL_MAIN_IB=<dev_name:port> or -x UCX_NET_DEVICES=<dev_name:port>.
If no device was specified for HCOLL (or the calling library), automatic device detection will be run.
...
In case of unfounded HCA device please contact your system administrator.
...
... Error: coll_hcoll_module.c:310 - mca_coll_hcoll_comm_query() Hcol library init failed

then, in an interactive session of the container, set the Modular Component
Architecture (MCA) parameters below to disable cross-memory attach (CMA) and
hierarchical collectives (HCOLL):

mpirun -np ${num_gpus} \
    --mca pml ucx \
    -x UCX_TLS=^cma \
    --mca coll_hcoll_enable 0 \
    -x OMPI_MCA_coll_hcoll_enable=0 \
    {your_command}

If the warnings and errors are no longer emitted, please consult your system
administrator and confirm the hardware and software architecture to ensure
optimal usage of the cuQuantum Appliance.


Important change notices

version == 23.10

The following image tags are available:

nvcr.io/nvidia/cuquantum-appliance:23.10
nvcr.io/nvidia/cuquantum-appliance:23.10-devel-ubuntu22.04

nvcr.io/nvidia/cuquantum-appliance:23.10-arm64
nvcr.io/nvidia/cuquantum-appliance:23.10-devel-ubuntu22.04-arm64

nvcr.io/nvidia/cuquantum-appliance:23.10-devel-ubuntu20.04

Before v23.10, the operating system in the container was Ubuntu 20.04. In
v23.10, we added support for Ubuntu 22.04 without dropping support for Ubuntu
20.04. To avoid breaking changes implied by altering the image tag,
nvcr.io/nvidia/cuquantum-appliance:23.10 now points to
nvcr.io/nvidia/cuquantum-appliance:23.10-devel-ubuntu22.04.

This means that for a given machine architecture, march='arm64' or
march='x86_64', pulling from cuquantum-appliance:23.10-${march} is
equivalent to pulling from cuquantum-appliance:23.10-devel-ubuntu22.04-${march}.
The following two docker pull commands will download the same image:

docker pull nvcr.io/nvidia/cuquantum-appliance:23.10
docker pull nvcr.io/nvidia/cuquantum-appliance:23.10-devel-ubuntu22.04
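
To confirm locally that the two tags are aliases, you can compare their image
IDs after pulling both:

...$ docker inspect --format '{{.Id}}' \
       nvcr.io/nvidia/cuquantum-appliance:23.10 \
       nvcr.io/nvidia/cuquantum-appliance:23.10-devel-ubuntu22.04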

Security scanning notices

Version 23.10 security scanning results summary

This section provides a summary of potential vulnerabilities that are evaluated
with high severity by the CVSSv3.1 standard. To view security
scanning results for the latest container image, refer to the security scanning
tab near the top of this page, or follow this link.

CVE ID          SCORE  VECTOR                                        STATUS    DESCRIPTION                                                                      REFERENCES
CVE-2023-36632  7.5    CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H  DISPUTED  RecursionError in email.utils.parseaddr while calling Python object             CVE
CVE-2018-20225  7.8    CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H  DISPUTED  remote code execution with malicious url using --extra-index-url option in pip  CVE

Appliance version end of life summary

VERSION  STATUS     NOTICE
23.10    SUPPORTED  N/A
23.06    SUPPORTED  EOL 24.03
23.03    EOL        No new features or security remediation
22.*     EOL        No new features or security remediation

Note: for a version formatted as YY.*, the notice applies to all versions with
the same year.


Documentation

The NVIDIA cuQuantum Appliance documentation is hosted here.
A guide for using Qiskit can be found here.
A guide and tutorials for using Cirq can be found here.
A guide to getting started with qsimcirq can be found here.


Additional Resources

The NVIDIA cuQuantum SDK Homepage
The NVIDIA cuQuantum Python Bindings and Examples

For a general guide on pulling and running containers, see Pulling a Container Image and
Running a Container in the NGC Container User Guide.


License Agreement

The image is governed by the NVIDIA End User License Agreement.
By downloading the NVIDIA cuQuantum Appliance, you accept the
terms and conditions of this license. The cuQuantum Appliance
End User License Agreement can be viewed here. Since the image includes
components licensed under open-source licenses, the source code
for these components can be found here.