DGL | NGC Catalog


This container is built with the latest versions of the Deep Graph Library (DGL), PyTorch, and their dependencies.



Latest Tag: 23.09-py3

Modified: September 27, 2023

Compressed Size: 10.23 GB

Multinode Support

Multi-Arch Support

23.09-py3 (Latest) Scan Results: Linux / amd64, Linux / arm64

What is inside this container?

Deep Graph Library (DGL) is a Python package built for the implementation and training of graph neural networks on top of existing DL frameworks. NGC Containers are the easiest way to get started with DGL. The DGL NGC Container is built with the latest versions of Deep Graph Library (DGL), PyTorch, and their dependencies.

The DGL NGC Container is optimized for GPU acceleration and contains a validated set of libraries that enable and optimize GPU performance. This container also contains software for accelerating data sampling and ETL (cuGraph, NVIDIA Rapids), Training (cuDNN, NCCL), and Inference (TensorRT) workloads.


There are two main prerequisites for running DGL containers: a working Docker installation and the NVIDIA Container Toolkit, which enables GPU access from containers via the --gpus flag.
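As a quick sanity check before pulling the DGL image, the following commands (a sketch; the CUDA base image tag is illustrative and any CUDA-enabled image works) confirm that Docker is installed and GPU passthrough is functional:

```shell
# Check that Docker is installed and the daemon is reachable
docker --version

# Verify GPU passthrough by running nvidia-smi inside a CUDA base image;
# a successful run prints the host's GPUs and driver version
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```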

Running the container

Use the following command to run the container, where <xx.xx> is the container version; for example, 23.07 for the July 2023 release:

docker run --gpus all -it --rm nvcr.io/nvidia/dgl:<xx.xx>-py3

Running JupyterLab and examples

To start JupyterLab from the container and view all the included examples:

docker run --gpus all -it --rm -p 8888:8888 nvcr.io/nvidia/dgl:<xx.xx>-py3 bash -c 'source /usr/local/nvm/ && jupyter lab'

You might want to pull in your own data or persist code outside the DGL container. The easiest method is to mount one or more host directories as Docker bind mounts so your code changes persist.
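A bind mount can be added with Docker's -v flag. The host directory name below is hypothetical; substitute your own project path:

```shell
# Mount the host directory ./my-dgl-work at /workspace/local in the container,
# so code and data written there persist after the container exits
docker run --gpus all -it --rm \
  -v "$(pwd)/my-dgl-work:/workspace/local" \
  nvcr.io/nvidia/dgl:<xx.xx>-py3
```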

We also have a GraphSAGE training example:

cd examples/graphsage
python3 --dataset cora --gpu 0

If you are looking for more examples from DGL, you can find them in /opt/dgl/dgl-source/.

Documentation and resources

Documentation and release notes are available on the container's NGC catalog page.

Ethical AI

NVIDIA's platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model's developer to ensure that:

  • The model meets the requirements for the relevant industry and use case.
  • The necessary instructions and documentation are provided to understand error rates, confidence intervals, and results.
  • The model is being used under the conditions and in the manner intended.